Show HN: Chie – a cross-platform, native, and extensible desktop client for LLMs
chie.app

I'm submitting this before going to bed, so by the time it hits the front page (if it does at all) I'll likely be asleep and won't be able to answer questions in time.

I advise you not to look up what it means in French.

I actually asked ChatGPT what chie might mean in other languages, multiple times, but it did not tell me this :(

Sorry, too late, I'm French.

Looks great! Very excited to try this out. At first glance I have one concern and one suggestion.

Concern: you mention it supports the actual ChatGPT web API -- isn't this against their terms of service? I'd be wary of publishing something that hits internal APIs.

Suggestion: I would rename what you call the "ChatGPT" (non-web) part of the backend to "OpenAI API" instead. There is already enough confusion in the world between ChatGPT and API access to the underlying GPT models.

> Concern: you mention it supports the actual ChatGPT web API -- isn't this against their terms of service? I'd be wary of publishing something that hits internal APIs.

There seem to be two extensions, one called "chatgpt" and one called "chatgpt-web" (https://github.com/chieapp/chie/tree/main/src/extensions). The latter appears to use https://chat.openai.com/backend-api/conversation, which is OpenAI's private API for ChatGPT. Using this (as a user) can apparently get you banned, judging by some recent Reddit posts. The extension called "chatgpt", however, correctly uses the official endpoint, https://api.openai.com/v1/chat/completions -- but then it's no longer ChatGPT you're using, so the naming is a bit confusing, as you said.

I haven't read the terms of service, but would headless browser automation be a safer solution?

The only thing that would keep you safe, if OpenAI bans people for accessing the API outside of their client-side application, is making those requests from a different IP than the one you normally use ChatGPT from.

Kudos for taking the effort of not being yet another Electron app.
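For context, here is a minimal sketch of what a request to the official endpoint (the one the "chatgpt" extension uses) looks like. The model name and API-key handling are placeholders, not values taken from Chie's code:

```typescript
// Minimal sketch of a request to OpenAI's official chat completions
// endpoint. The API key and model name are placeholders.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? "sk-placeholder";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the request without sending it, so the shape is easy to inspect.
function buildChatRequest(messages: ChatMessage[], model = "gpt-3.5-turbo") {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${OPENAI_API_KEY}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

const req = buildChatRequest([{ role: "user", content: "Hello" }]);
// Sending it would be: await fetch(req.url, req.options)
console.log(req.url);
```

The private backend-api endpoint, by contrast, is undocumented and requires session cookies from a logged-in browser, which is why it carries the ban risk discussed above.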
Shameless plug for my version of this, written in Rust: https://github.com/clarkmcc/chitchat

Hi. This is the first time I'm trying an external app. I created a ChatGPT API key and configured it in Chie. When I tried asking ChatGPT, it immediately replied "You exceeded your current quota, please check your plan and billing details", which I can't believe is the case. I then tried to press the copy button over this reply to paste it here, but that raises an error: "TypeError: Cannot read properties of undefined (reading 'content') at #copyTextAt (chieapp.exe\asar\dist\view\chat-view.js:1:11362)".

Your "future plan" section says you want to provide a subscription plan for GPT-4. Why would I pay a subscription to access my other subscription, especially when it's a feature you currently offer for free? I feel like a one-time fee to unlock access to premium models could be a better option. You will need to differentiate from your main free competition, chatboxai, to get the meaningful userbase you have stated you want.

The UI toolkit it uses is Yue, which I hadn't heard of: https://libyue.com/docs/latest/js/

GPL3 license: https://github.com/chieapp/chie/blob/main/LICENSE

> Chie is currently licensed under the GPLv3; it will be relicensed to the MIT license on March 20th, 2028 (5 years after the first commit). I take this approach to discourage closed-source software from taking advantage of my work before it gains enough impact. Due to the future relicense, contributors will be asked to sign an agreement.

The agreement doesn't mention that it's just for relicensing to MIT.

This looks nice! It says it's not an Electron app, but GitHub says it's written in TypeScript, so what UI toolkit does it use? EDIT: Never mind, this is mentioned in the technical details link, thanks!

I tried it and got the error "You exceeded your current quota, please check your plan and billing details."

Great job! In the future, would you add support for local LLMs, such as LLaMA?
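The TypeError above suggests the copy handler indexes into the message list without checking that the entry exists. A sketch of the kind of guard that would avoid it -- the `messages` array and `copyTextAt` signature here are illustrative assumptions, not Chie's actual code:

```typescript
// Illustrative sketch only: these types and names are assumptions,
// not Chie's actual implementation.
interface Message {
  role: string;
  content: string;
}

const messages: Message[] = [{ role: "assistant", content: "quota error text" }];

// Guarding the lookup avoids "Cannot read properties of undefined
// (reading 'content')" when the index is stale or out of range.
function copyTextAt(index: number): string | null {
  const message = messages[index];
  if (message === undefined) return null; // fail gracefully instead of throwing
  return message.content;
}

console.log(copyTextAt(0)); // "quota error text"
console.log(copyTextAt(5)); // null
```

A stale index like this can easily happen when a UI action races with the message list being rewritten (for example, after an error reply replaces a pending message).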
I like how it's now a feature to not be an Electron or webview app.

That was kind of a lie, however. The page says:

> NOT a webview wrapper of web pages.

but further down:

> The chat messages are rendered in a system webview. To minimize performance penalty the webview only renders a static HTML page with very limited JavaScript code.

So it is in large part a webview.

Would be cool if there was a plugin for Petals (https://petals.dev/).

> The chat messages are rendered in a system webview.

I would argue it is not in large part a webview. The webview-related code is 900 lines vs 10k lines for the whole project.
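For what it's worth, the "static HTML page with very limited JavaScript" approach quoted above can be sketched roughly like this -- all names here are assumptions for illustration, not Chie's actual code (which goes through Yue's browser view):

```typescript
// Hedged sketch of the static-page-plus-tiny-bridge pattern described
// above. Function names are assumptions, not Chie's actual API.

// Escape message text before injecting it into the static page, so the
// webview never interprets model output as markup.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// The native side builds one small JS call and hands it to the webview
// (e.g. via an executeJavaScript-style API); only a single
// appendMessage() function needs to live inside the page itself.
function appendMessageScript(role: string, content: string): string {
  return `appendMessage(${JSON.stringify(role)}, ${JSON.stringify(escapeHtml(content))});`;
}

console.log(appendMessageScript("assistant", "2 < 3"));
```

Under this pattern the webview is a dumb renderer: all state lives on the native side, which is consistent with the author's 900-lines figure.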