The Fraud of Big AI
We're being gaslit. Claude 4.5 Sonnet could write complete patches at launch. Now the effective context window is so degraded that it can't finish most responses without truncating them. Every major model follows the same pattern: an amazing launch, then silent degradation over the following weeks. OpenAI admitted to changing GPT-4 without disclosure. Anthropic won't even acknowledge it.
They announce every 2% benchmark improvement but never mention when they swap in cheaper inference or add output limits. This isn't confirmation bias when the model literally cannot complete tasks it handled a month ago. Are we just supposed to accept that the product we're paying for gets worse while the marketing gets louder? How is this not a bait-and-switch? Either give us API version pinning for the actual model weights or admit you're optimizing for profit over capability.

I'd put it this way: whenever I get a "title is x characters too long" error, I ask ChatGPT to shorten the text so that it is x characters shorter or less. It can never do it in fewer than 5 iterations. It's a pretty straightforward request.
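For the curious, here's roughly what that loop looks like in code: a minimal sketch assuming the OpenAI Python SDK, a placeholder model name, and a generic prompt (not my exact wording). The character check itself is trivial; counting how many passes it takes is the whole point.

```python
# Minimal sketch of the "shorten to fit" loop. Assumes the OpenAI Python SDK
# and OPENAI_API_KEY in the environment; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def shorten_to_limit(text: str, limit: int, max_attempts: int = 5) -> str:
    """Repeatedly ask the model to trim `text` until it fits `limit` characters."""
    for attempt in range(1, max_attempts + 1):
        excess = len(text) - limit
        if excess <= 0:
            return text  # already fits
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{
                "role": "user",
                "content": (
                    f"Shorten the following text by at least {excess} characters "
                    f"so the result is {limit} characters or fewer. "
                    f"Reply with only the shortened text.\n\n{text}"
                ),
            }],
        )
        text = resp.choices[0].message.content.strip()
        # Log each pass so you can see how many iterations it actually takes.
        print(f"attempt {attempt}: {len(text)} characters (limit {limit})")
    return text  # may still exceed the limit after max_attempts

shortened = shorten_to_limit("Some overlong draft title goes here...", limit=60)
```

Run it on any overlong title and watch the attempt counter; that's the whole experiment.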