reminder: in the text below anything written by ai is in plain text, whereas my commentary is lowercase, italic, and in a block-quote, with blemishes not edited out.
i’ve experienced claude code quality degradations a few times. it’s hard to explain, but there are periods of time where it just seems to get (for lack of a better word) dumber. i’ve started thinking about these situations as “intelligence brownouts”.
a recent post mortem by anthropic acknowledges that these brownouts happen.
turns out that changes to claude code (not the claude model) are what caused a brownout last month. the reason anthropic made the changes was “to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode.”
i’ve written about this kind of thing before.
intelligence brownouts are fine when the primary use cases are single-player “assistant” style, party-trick interactions like building a powerpoint, writing an email, doing some research, etc.
but if a company is going to have agents set up as virtual team members that help with high-value parts of the company’s workflow, you can’t have intelligence brownouts.
you wouldn’t want your on-call software engineer, customer service rep, or manufacturing data analyst to suddenly start making mistakes because the provider is trying to optimize for latency to improve usability for other customers.
this is one of the reasons i’m so bearish on agents from vertical agent providers. sure, you could try to do something like have your customer service agent provider tied to some kind of an eval-oriented sla, but really the issue is the incentives.
vertical providers are incentivized to make something that works well for everyone. but as a business owner looking for a virtual employee i just want someone(thing) that works for my company.
i wouldn’t want to share the capabilities of that employee (and the learnings from operating them) with other companies (one of which could be my competition).
brownouts are why it’s important for companies to develop the capability to build agents and maximize the control they have over the harness (and eventually models; see below). most companies outside of silicon valley right now don’t have this capability.
but this capability will be absolutely critical for knowledge work in the near future.
Moonshot AI open-sources Kimi K2.6, a model built for long-horizon agentic coding. It autonomously ran 4,000+ tool calls over 12 hours, rewrote a financial matching engine, and hit benchmarks competitive with closed-source leaders.
the strength of open models is getting pretty incredible and i think soon (maybe even now) we will be able to cover a majority of the use cases companies need with open models, at least the ones i’ve seen.
the model is huge at 1t parameters so running it locally is out of reach. but when you consider the big spend companies are making on tokens with the frontier labs (billions) it could be easy for some companies to justify the capex to run local models.
based on back-of-the-envelope math (with claude’s help), you could run this model fully loaded (energy, capex, etc.) at 7k tokens per second for about $1.33/1m tokens on a 5-year depreciation schedule.
this is back of the envelope, but even if it’s off by 10x and the token cost is $13.30/1m tokens, that’s still less than opus/sonnet 4.6 and the openai 5.x class of models. and you are not susceptible to brownouts. pretty valuable proposition.
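the back-of-the-envelope math above can be sketched out explicitly. every number below is an illustrative assumption (cluster price, power draw, electricity rate, overhead multiplier), not a vendor quote; the only fixed structure is depreciation plus energy, divided by throughput.

```python
# back-of-the-envelope cost model for self-hosting a large open model.
# all inputs are illustrative assumptions, not vendor quotes.

HOURS_PER_YEAR = 8760

def cost_per_million_tokens(capex_usd, depreciation_years,
                            power_kw, usd_per_kwh,
                            overhead_factor, tokens_per_second):
    """fully loaded $/1m tokens: straight-line depreciation plus energy,
    times an overhead factor for cooling, networking, ops, etc."""
    hourly_capex = capex_usd / (depreciation_years * HOURS_PER_YEAR)
    hourly_energy = power_kw * usd_per_kwh
    hourly_total = (hourly_capex + hourly_energy) * overhead_factor
    tokens_per_hour = tokens_per_second * 3600
    return hourly_total / tokens_per_hour * 1_000_000

# assumed: ~$1m cluster, 5-year depreciation, 40 kw draw,
# $0.10/kwh, 1.3x overhead, 7,000 aggregate tokens/sec
print(round(cost_per_million_tokens(1_000_000, 5, 40, 0.10, 1.3, 7000), 2))
```

with those assumed inputs the model lands around $1.38/1m tokens, in the same ballpark as the $1.33 figure above; the dominant term is depreciation, so the answer is most sensitive to the cluster price and how hard you can keep it utilized.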
Ars Technica AI Newsroom Policy
Ars Technica publishes its internal AI policy. AI assists research and workflow, but every word published is written and decided by a human.
we are going to see more of this type of thing in the future. for what it’s worth, i also think that magazines will come back because we will start to look for media to consume that is less susceptible to slop machines.
Alibaba drops Qwen3.6-27B, a 27B open-source model punching well above its weight on reasoning and coding benchmarks. Another sign the open-source model gap with closed-source is compressing fast.
Browser Use details how they moved from isolating the tool to isolating the entire agent -- each agent runs in its own Unikraft micro-VM with zero secrets, talking to the outside world only through a stateless control plane.
i’m not completely clear that os sandboxes make sense for agents. yes, agents are good with the terminal, but my main issue with sandboxing is: how do you have multiple agents work on the same thing at the same time? this works well for code, where there is a clear diff/merge workflow, but what about other enterprise workloads?
Vercel CTO: 60% of Platform Traffic is Now From AI Agents
The majority of traffic hitting the platform is now automated -- a concrete data point that the web is shifting from human-driven to agent-driven at infrastructure scale.
i keep telling people that humans will be the minority participant in the economy soon. this stat from vercel certainly points in that direction.
McKinsey: The Agentic Organization Isn’t Yet
McKinsey finds AI is everywhere but genuine agentic transformation is rare; most firms are experimenting at the edge, not restructuring around agents.
i expect to see this change a lot this year. my boots-on-the-ground assessment is that people are trialing things right now.
Is claude a rich man’s product? Numbers seem to suggest so.

Epoch AI@EpochAIResearch
80% of US adults who report using Claude in the previous week live in households earning $100,000 or more a year, compared to 37% of Meta AI users. Other major providers cluster in a relatively narrow band, with 56–64% of users in $100,000+ households.

8:53 PM · Apr 22, 2026
On May 7 I’m hosting a free LinkedIn Live on turning individual AI wins into team systems. Click here to grab a spot



