Ask HN: Anyone else experiencing lower quality in Claude Code over the last few weeks?
Hi,
I am on the mid-tier (paid) plan and using the web interface, and in the last few weeks the quality of Claude's output has decreased dramatically. The model is Sonnet 4.5, and my workflow is copy/pasting code and inspecting the diffs.
It changes code it shouldn't change and even makes syntax errors (like emitting a character different from the one in the provided source file).
I know these LLMs change all the time, and this is just an anecdote, but I am interested in what others think.

Are you based in the US? I've heard that compute capacity peaks during US working hours, and access may be degraded during that time, for example through dynamic quantization [1].

[1] https://www.seangoedecke.com/ai-is-good-news-for-australian-...

No idea, I haven't used CC in a while, but I've noticed a general pattern with these AI tools where they seem to start strong and then slowly degrade over time. Could be cost optimisation through some sort of resource throttling. Most of these companies aren't profitable, so it wouldn't surprise me if they're quietly tuning things down to save compute.

Yes, that's a good point. I think they underestimate the impact this has on users.

For me it got dramatically worse. It can't read whole files of concatenated code anymore; its attention seems much smaller. It now does lots of piecemeal behind-the-scenes command-line stuff instead of just seeming to "understand" everything in one go. Somewhat catastrophic for me, so bad that I was surprised to discover that ChatGPT 5 is still worse.