OpenAI models answer an older prompt instead of the current one


There is a strange failure mode that I have been noticing in OpenAI models: in a longer session, the model can suddenly answer an earlier prompt instead of the one right in front of it.

You can be done with topic A, move on to something else (B), and then suddenly get a reply that clearly belongs to A. Sometimes the model even handles the new topic correctly at first, which makes the jump back even more confusing.

This does not look like just a weak answer or a misunderstanding. It looks more like the model briefly loses track of which part of the conversation it is supposed to be responding to.

The problem is not only that the answer is wrong. The problem is that it appears to be aimed at the wrong place in the conversation.

What other people report

Based on issue reports and discussions from other users, the problem does seem to be real. Multiple people have described the same general behavior: older prompts resurfacing, stale answers appearing in the middle of a newer discussion, and the model drifting back to a previous part of the conversation.

Most of the reports I found are Codex-related, but that does not necessarily mean the problem lives only there. Those environments simply make the issue easier to notice: people work through longer, more structured sessions and can tell more clearly when a reply belongs to an earlier step.

So I don't think it is solely a Codex problem and also not a coding-only problem. It is a conversation-tracking problem that becomes especially visible in longer and more demanding sessions.
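One reason older prompts remain available to drift back to: chat APIs are stateless, so the client resends the full message history on every turn. The sketch below is illustrative only (it is not the Codex internals, and the topic labels are made up); it shows how turns from a finished topic A are still sitting in the context when topic B is answered.

```python
# Illustrative sketch: the client keeps one growing message list and
# resends all of it each turn, so earlier topics never leave the context.

history = [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history, user_msg, assistant_msg):
    """Append one user/assistant exchange to the running history."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_msg})

# Finish topic A, then move on to topic B.
add_turn(history, "Refactor the parser (topic A)", "Done: parser refactored.")
add_turn(history, "Now draft the changelog (topic B)", "Changelog drafted.")

# Topic A's prompt is still in the list the model sees for topic B.
topic_a_turns = [m for m in history if "topic A" in m["content"]]
print(len(topic_a_turns))  # 1
```

Nothing here excuses answering the wrong prompt, of course; it only shows why a stale answer is always *possible*: the older material never leaves the model's input.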

Where the problem likely sits

There is still no clean public answer on the root cause.

The reports do not point to one single broken CLI version, and they do not support the idea that one model family is completely unaffected. Similar complaints show up across several GPT-5.x variants, both -codex and non-codex models.

So this does not look like one bad release, one bad setting, or one bad wrapper. It looks more like an underlying model behavior that becomes easier to trigger in certain environments.

What to do about it

There is no clear public fix yet.

The obvious mitigation is to keep sessions shorter: split unrelated topics into separate threads, so there is less stale context for the model to drift back to.
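If you drive the model through an API rather than a chat UI, the split can be automated. Here is a minimal sketch, assuming a hypothetical `TopicSessions` helper (the class name and topic keys are my invention): each topic gets its own message list, so turns from one topic never appear in another topic's context.

```python
# Hypothetical helper: one independent message thread per topic, so stale
# turns from topic A never reach the model while you work on topic B.
from collections import defaultdict

class TopicSessions:
    def __init__(self, system_prompt):
        self.system_prompt = system_prompt
        # Each new topic starts from a fresh thread with only the system prompt.
        self.threads = defaultdict(self._new_thread)

    def _new_thread(self):
        return [{"role": "system", "content": self.system_prompt}]

    def messages_for(self, topic, user_msg):
        """Append the user's message to this topic's thread and return the
        full message list to send with your chat-completion call."""
        thread = self.threads[topic]
        thread.append({"role": "user", "content": user_msg})
        return thread

sessions = TopicSessions("You are a coding assistant.")
sessions.messages_for("parser", "Refactor the parser.")
msgs = sessions.messages_for("changelog", "Draft the changelog.")
# msgs contains only the system prompt and the changelog request --
# no parser turns for the model to fall back on.
```

This is a sketch, not a fix: it trades away cross-topic context on purpose, which is exactly the point when the topics are unrelated.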

The larger point

The interesting part of this issue is not that the model gets things wrong. Everyone already knows that models get things wrong.

What stands out here is the type of mistake. When a model answers an older prompt while presenting the reply as if it were about the current one, it undermines trust in a fundamental way, because conversational continuity is one of the basic things people expect from these systems.

