Ask HN: Do you get worse answers from the OpenAI API compared to ChatGPT?
Yes. My attempts to match the API with the chat interface have proved futile. The API never seems to behave the same way as the chat interface.
Have you tried tweaking parameters like temperature, top_p, or seed value when sending the API request?
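For reference, a minimal sketch of what pinning those parameters in the request body might look like (the model name and values here are illustrative, not recommendations, and the payload is just built, not sent):

```python
# Sketch: setting temperature, top_p, and seed explicitly in a
# chat completions request body. Values are illustrative.
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 1.0,  # sampling temperature; lower = less random
    "top_p": 1.0,        # nucleus sampling cutoff
    "seed": 42,          # best-effort reproducibility across calls
}

# You would then POST this as JSON to the chat completions endpoint,
# e.g. with requests or the official openai client library.
print(sorted(payload.keys()))
```

Note that even with a fixed seed, OpenAI only promises best-effort determinism, so identical outputs aren't guaranteed.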
Beyond that, and given the probabilistic nature of LLM responses, I'm not sure how a reproducible "matching" between the chat interface and the API could be achieved.
I'm working primarily with the API through my own wrapper, and I've noticed that I tend to give less detailed instructions than when I'm using the OpenAI chat interface, often resulting in less accurate responses.
Thanks, I will. I must confess I haven't tried top_p or the seed value. I guess my naive belief was that the defaults in the API matched the OOTB experience of the web interface. Shame on me.
It works pretty well for me. I'm using MacGPT. Maybe mess with your prompt.
Yes, I've experienced this via their API, although the GPT-4 model is a tad better in its responses than the turbo model.