
Show HN: Next.js ChatGPT – Responsive chat application powered by GPT-4

github.com

126 points by enricoros 3 years ago · 35 comments


enricoros (OP) 3 years ago

What does it take to make a basic ChatGPT-like frontend, with code highlighting, running code in a sandbox, drag-and-drop files, and 'acting' in prompts? Clone away and enjoy. First-time poster

  • anonzzzies 3 years ago

    Thanks, it is good :) You don’t need the server side though; you can just call the OpenAI APIs straight from the client. Makes things easier!

    • capableweb 3 years ago

      It seems to support both? First it tries to load the key from an environment variable, and if it can't, it'll ask for it client-side.

      Nonetheless, if you're building a project for others, you most likely don't want the secret key to be public, which it would be if you embedded it in the client-side code.
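
      A minimal sketch of that fallback pattern (the function and variable names here are illustrative, not the repo's actual code):

```typescript
// Resolve the OpenAI API key: prefer a server-side environment variable,
// otherwise fall back to a key the user supplies in the UI.
// Hypothetical sketch, not the app's actual implementation.
function resolveApiKey(envKey: string | undefined, userKey: string | null): string {
  if (envKey && envKey.trim().length > 0) {
    return envKey; // server-side key never ships in the client bundle
  }
  if (userKey && userKey.trim().length > 0) {
    return userKey; // user-provided key, only ever held in the browser
  }
  throw new Error("No API key: set the env var or enter one client-side");
}
```

      The point above stands either way: a key baked into client-side code is public; a key read from a server-side env var is not.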

    • yodon 3 years ago

      If you call the OpenAI APIs straight from the frontend, you are likely leaking your API keys to visitors, who can then use them (and your rate limits) for their own purposes.

      • anonzzzies 3 years ago

        Nah, you ask them to enter their own; if you use yours, then yes, only use a backend. In this case the author is not using his; you have to bring your own, so frontend is fine.

  • nico 3 years ago

    What do you mean by “acting”?

    That it pretends to be a Scientist? Or that it can perform actions (like sending an email or searching the web)?

    If the latter, do you have more info/docs about it? Didn’t see it on the roadmap.

    • capableweb 3 years ago

      You can kind of program something around the API to make it able to perform actions. That's the way I'm guessing Microsoft did it with Bing. Here is an example:

      System prompt:

          - You are a text-based AI that delegates to other AIs.
          - You only respond in JSON with two keys `delegate_to` and `parameters`.
          - `delegate_to` defines what other AI you should delegate to.
          - `parameters` are parameters to send to that AI.
          - Here is the list of possible delegates
          - delegate_to: email_ai, parameters: $to $subject $body, used for sending emails
          - delegate_to: calendar_ai, parameters: $time, $date, $title, used for creating events in a calendar
          - delegate_to: search_ai, parameters: $search-term
          - Be creative and helpful.
          - Your goal is to take user input and successfully delegate the task you receive from the user to the correct AI.
          - Your initial message to the user should welcome them and ask them what you can do for them.
          
      Example conversation:

          > {"message": "Welcome! I'm your AI assistant. What can I do for you today?"}
          
          > User: Send email to john@example.com asking if he wants to have dinner tonight
          
          {
            "delegate_to": "email_ai",
            "parameters": {
              "to": "john@example.com",
              "subject": "Dinner Invitation",
              "body": "Hi John, would you like to have dinner together tonight? Let me know your thoughts. Best regards."
            }
          }
      
      Then you'd make your program read the JSON messages it generates and perform the correct action.
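
      That reader/dispatcher might look something like this (delegate names taken from the prompt above; the handler bodies are placeholders, and real code would validate the JSON schema and handle parse errors):

```typescript
// Parse the model's JSON reply and route it to the matching handler.
type Delegation = { delegate_to: string; parameters: Record<string, string> };

const handlers: Record<string, (p: Record<string, string>) => string> = {
  email_ai: (p) => `sending email to ${p.to}: ${p.subject}`,
  calendar_ai: (p) => `creating event "${p.title}" at ${p.time} on ${p.date}`,
  search_ai: (p) => `searching for ${p["search-term"]}`,
};

function dispatch(raw: string): string {
  const msg = JSON.parse(raw) as Delegation;
  const handler = handlers[msg.delegate_to];
  if (!handler) throw new Error(`unknown delegate: ${msg.delegate_to}`);
  return handler(msg.parameters);
}
```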

      • nico 3 years ago

        Nice. How would you go about providing a dynamic list of delegates? Could it work to just give one delegate that can provide a list of delegates with a description of the actions they can perform (then that delegate can query a db and return a list)?

        Re-reading, I’m guessing the prompt could also be dynamically generated to include the most relevant delegates.

        • sprobertson 3 years ago

          > the prompt could also be dynamically generated to include the most relevant delegates

          Yup, that's how I'm doing it - the system prompt is re-generated for every request, and that includes fetching a list of available delegates and the arguments they accept. I only have 10, so I'm just listing all of them, but if you had some huge number you could combine that with embeddings / vector lookup.
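
          One way to regenerate that prompt per request (the registry contents here are made up; in practice they'd come from a database):

```typescript
// Build the system prompt from whatever delegates are currently registered.
// Illustrative sketch of the per-request regeneration described above.
interface Delegate { name: string; params: string[]; purpose: string }

function buildSystemPrompt(delegates: Delegate[]): string {
  const lines = [
    "- You are a text-based AI that delegates to other AIs.",
    "- You only respond in JSON with two keys `delegate_to` and `parameters`.",
    "- Here is the list of possible delegates:",
    ...delegates.map(
      (d) =>
        `- delegate_to: ${d.name}, parameters: ${d.params
          .map((p) => "$" + p)
          .join(" ")}, used for ${d.purpose}`
    ),
  ];
  return lines.join("\n");
}
```

          With a large registry, you'd first rank delegates by similarity to the user's message (the embeddings / vector lookup mentioned above) and pass only the top few into this function.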

        • behnamoh 3 years ago

          At that point we're basically back to the `AI is just nested if-else expressions` story. The only difference is that now there is a language reader on top that understands the semantics of your language. But actors (or agents, in LangChain lingo) are just if-else. The tools you connect them to must be developed separately.

          • nico 3 years ago

            Sure, you could also say that human language/action capacity is just a biological LLM with some ifs on top that give it access to actions.

            In the case you describe, you can have an LLM write the tools.

            Yes, the first tools and bridge code might need to be manually built. But after that it could be LLMs all the way down.

            Kind of similar to writing a new programming language. At first you write it in another language but after compiling it for the first time, you can then write the language in the language itself.

          • enricoros (OP) 3 years ago

            Very good point. Once you start breaking down an LLM into presets/delegators, you basically introduce if-else, with all the problems of that split: lack of visibility, local vs. global optimization, lack of control and predictability, asymmetry of information. I wonder if the current Agents approach is a stopgap solution.

tonyoconnell 3 years ago

Here's one made with Astro: https://github.com/ddiu8081/chatgpt-demo There's a fork in Chinese somewhere with loads more features.

yayr 3 years ago

Nice, maybe you could make the initializing prompts in

https://github.com/enricoros/nextjs-chatgpt-app/blob/main/pa...

transparent to the user and even changeable by them.

  • enricoros (OP) 3 years ago

    When the user selects one of those, any query will reveal the prompt. It can be changed, but the change won't be persisted yet. We added a 'Custom' preset today that requires editing. Agree with your point though - right now editing happens via 'forking' :)

phil42 3 years ago

But I need a GPT-4 API key for that, right? The normal API keys don't work?

  • capableweb 3 years ago

    The key for GPT-3 (`gpt-3.5-turbo` is the actual model ID) is the same as for GPT-4. You specify the model when you make the request to the API.
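
    For example, the model is just a field in the request body sent to the chat completions endpoint; the Authorization header carries the same API key either way:

```typescript
// Build the request body for OpenAI's chat completions API.
// Only the `model` field differs between GPT-3.5 and GPT-4 requests.
function chatRequestBody(model: "gpt-3.5-turbo" | "gpt-4", prompt: string) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
  };
}

// Usage (not executed here):
// fetch("https://api.openai.com/v1/chat/completions", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
//   body: JSON.stringify(chatRequestBody("gpt-4", "Hello")),
// });
```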

    • layer8 3 years ago

      Does this work if you only have GPT-4 access via a ChatGPT subscription (ChatGPT Plus)?

      • fredliu 3 years ago

        It doesn't. Even if you have ChatGPT Plus, the key you have only supports 3.5 unless you are explicitly given the gpt-4 key.

        • capableweb 3 years ago

          It's true that you need explicit access to GPT-4 to use it via the API, but again, it's not a different key. I'm using the same API key for accessing `gpt-3.5-turbo` as I use for `gpt-4`.

        • fredliu 3 years ago

          There already seems to be a PR for adding 3.5 support. The community and the speed of change in this field are mind-blowing!

          • capableweb 3 years ago

            Oh yeah, changing 33 lines is truly revolutionary!

            • enricoros (OP) 3 years ago

              OP: I went to sleep with this as my 1st post and 1 star, and woke up with a PR for 3.5-Turbo pending. Community for the win!

  • tonyoconnell 3 years ago

    Seems to fall back to GPT-3 if you haven't got access to the GPT-4 API yet
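
    A sketch of that kind of fallback logic, assuming the API rejects a model you don't have access to with an error (the `complete` callback is a stand-in for whatever actually calls the API, which would be async in real code):

```typescript
// Try GPT-4 first; if the account lacks access (the call throws),
// retry the same request with gpt-3.5-turbo.
function completeWithFallback(complete: (model: string) => string): string {
  try {
    return complete("gpt-4");
  } catch {
    return complete("gpt-3.5-turbo");
  }
}
```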
