No, AI Is Not Going to Kill Software Engineering


There is a particular kind of confidence that only comes from having something to sell. When Dario Amodei tells The Economist that AI will handle most or all of software engineering within six to twelve months, he is not speaking as a disinterested observer of the industry. He is the CEO of Anthropic, a company burning through cash at a remarkable rate while preparing for a public offering, and the valuation of that company depends almost entirely on people believing exactly what he just said. Boris Cherny, the creator of Claude Code, announced on a podcast in February that he has not manually edited a single line of code since November, that coding has been effectively “solved,” and that the job title software engineer will disappear by the end of this year. Cherny also works for Anthropic. An OpenAI researcher offered the pithiest version of the argument: “Programming always sucked. I don’t write code anymore.” Jack Dorsey, having just cut half of Block’s 10,000 employees and watched the stock jump 24%, told the market that within a year most companies would do the same.

These are smart people, and tools like Claude Code are genuinely impressive. Neither of those things is in dispute. What is worth examining is whether the extraordinary claims they are making about those tools reflect the actual capability of the technology, or whether they reflect the ordinary human tendency to overstate the importance of whatever you have spent the last year building.

Before attributing the current wave of tech layoffs to AI, it is worth accounting for the more boring explanation. Block grew from 3,835 employees at the end of 2019 to over 10,000 before its recent cuts, and is now returning to roughly its pre-pandemic size. Bloomberg has already labelled the announcement “AI-washing,” and analysts have noted that Block’s gross margins sit well below comparable payments companies like Visa and Mastercard, which suggests a business that needed to restructure for perfectly ordinary financial reasons and found AI a convenient narrative for the earnings call. Meta, Amazon, Microsoft and Salesforce have all told similar stories over the past two years, and all of them also massively over-hired between 2020 and 2022 when the Federal Reserve was holding rates at 0.25% and burning cash on headcount looked like rational strategy. When rates reached 5.25% by mid-2023, the calculus inverted. The cost of capital is a sufficient explanation for everything that followed, and AI provides the better press release.

None of this means the technology is irrelevant. It means you should apply the same scepticism to a founder’s assessment of his own product that you would apply to a pharmaceutical company’s summary of its own drug trial. The incentive to overstate is large and obvious, and history suggests that people generally respond to large and obvious incentives.

Having said all of that, pretending nothing has changed would be its own kind of dishonesty. The mechanical work of translating a clear specification into working code, the part that required knowing syntax, memorising API documentation and writing boilerplate that every engineer has written a hundred times before, is now largely automatable for a wide class of problems. Microsoft reports roughly 30% of its code is now AI-generated. Anthropic puts its own company-wide figure at between 70% and 90%. A study published in Science last month found that around 29% of Python functions on GitHub are now AI-written. One software engineer built a functioning Slack clone in fourteen days: real-time messaging, channels, threads, file uploads, search, 93 commits, something that would previously have required a small team and several months. Claude Code is, by any honest assessment, extraordinary, and anyone who has used it on a sustained project has experienced the specific vertiginous feeling of watching something that used to take days materialise in hours.

The question is what that actually means for the profession, and here is where the death narrative loses the plot.

A software engineer’s job, stripped to its essence, is to take ambiguous human requirements and convert them into a specification so precise that a machine can execute it without negotiation, without inference, without charitable interpretation. The compiler does not fill in gaps. Every ambiguity in the requirements must be resolved before the machine will cooperate, and resolving those ambiguities, understanding the business well enough to know what the system actually needs to do, is where almost all of the difficulty lives. Writing the code is what you do after you have done that. It is the residue left over once the genuinely hard thinking is complete.

What AI has changed is which ambiguities you need to resolve yourself. For commodity behaviour (authentication flows, pagination, error handling, form validation), Claude draws on a vast statistical model of how software generally works, and its defaults are usually correct. You no longer need to specify these things because the answer is close to what most systems do, and Claude knows that. The ambiguities that remain are the ones specific to your situation: the business rule that only makes sense when you understand its regulatory history, the edge case that matters only for your particular user base, the architectural decision that is wrong for everyone except you. These are precisely the cases where a model trained on general software patterns is most likely to produce something plausible-looking and subtly incorrect, where the cost of being wrong is highest, and where the gap between what was generated and what was needed is invisible until something breaks in production.

The teams that have already moved to AI-first development are discovering this concretely. The bottleneck is not a shortage of code; it is a shortage of engineers who can write precise, well-reasoned specifications in the first place. Getting Claude to build the right thing turns out to require exactly the skills that distinguished good engineers from mediocre ones before any of this existed, just expressed at a higher level of abstraction.

Every software engineer operates somewhere on a stack that runs from individual functions at the bottom to business problems at the top. Junior engineers think about whether functions work. Mid-level engineers think about how modules fit together. Senior engineers think about systems and architecture and what happens in production at three in the morning. The best engineers think about users, think about what the business actually needs, and work backwards from there. AI has reached credibly into the lower levels of this stack, and the engineers whose value was concentrated there are facing a genuine structural shift. That is uncomfortable, and it is real.

The engineers operating at the top of the stack are in a different position. A Staff engineer who previously needed five people to implement an architectural vision can now do it with Claude and one other person. The cost of executing good decisions has fallen dramatically, which makes good decision-making more valuable, not less. The profession is not being eradicated; it is being compressed upward, which is painful for some people and a significant power increase for others.

The most acute pain is at the entry level. Graduate hiring at the fifteen largest US tech companies is down 55% since 2019, according to SignalFire, and the junior software engineer market in the UK is running about 40% below its late-2022 levels. The likely future equilibrium looks less like the gold rush of the past decade and perhaps more like medicine: a period of relatively low-paid apprenticeship spent developing the organisational knowledge and judgement that a model cannot replicate, followed by a path to senior compensation that rises rather than falls as the premium for genuine expertise grows. This is still much less punishing than a medical residency, where you might invest $300,000 in education before earning $70,000 a year for several years, though the structure is broadly similar.

The inevitable response to this argument is that AI will improve and that the organisational knowledge advantage will eventually evaporate. It is worth being precise about why that is harder than it sounds, because the answer is not a vague appeal to human intuition but a concrete technical constraint.

Large language models operate within a fixed context window, around 200,000 tokens in the strongest current models before recall quality degrades significantly; million-token models exist, but they suffer from substantial recall degradation. Against that, consider what a software engineer who has spent three years at a company actually carries: the full decision history of the codebase, the reason the payments module has that strange architectural quirk, the knowledge of which enterprise client required the edge case that can never be removed, the memory of every production incident and what actually caused it.

Most of this context is tacit, distributed across the minds of everyone who has worked on the system, and has never been written down at all. The current version of the codebase contains the what, but it rarely contains the why. Getting this information into a model at the right moment is not a solved problem. It is exactly where the frontier of AI research is currently concentrated, and progress is slow.
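The scale mismatch above is easy to make concrete with rough arithmetic. The figures in this sketch (codebase size, tokens per line, usable window) are illustrative assumptions, not measurements:

```python
# Back-of-envelope: does a mature codebase fit in a model's context window?
# All numbers below are illustrative assumptions, not measured values.

CONTEXT_WINDOW_TOKENS = 200_000   # assumed usable window before recall degrades

lines_of_code = 500_000           # assumed mid-sized production codebase
tokens_per_line = 10              # rough average for source code

codebase_tokens = lines_of_code * tokens_per_line
ratio = codebase_tokens / CONTEXT_WINDOW_TOKENS

print(f"Codebase ~ {codebase_tokens:,} tokens, "
      f"{ratio:.0f}x the usable context window")
# Note this counts only the current source, i.e. the "what". The decision
# history, incident post-mortems and tacit knowledge (the "why") are not
# in the repository at all, so no window size recovers them.
```

Even under these generous assumptions the source alone overruns the window by an order of magnitude, and the tacit context the paragraph describes was never written down to begin with.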

The US Bureau of Labor Statistics published figures in February 2026 showing more than 6.6 million workers employed in tech occupations across the United States, with an unemployment rate of 3.6% against a national rate of 4.3%. Tech workers are less likely to be unemployed than the average American. Job postings are down roughly 35% from their 2022 peak, but 2022 was one of the most anomalous hiring environments the industry has ever seen, and using it as a baseline is like measuring the decline in restaurant attendance by comparing it to the last Saturday before a lockdown.

Cherny is probably right that the title software engineer will eventually give way to something broader. The role that is emerging from AI-augmented development requires understanding users, decomposing complex problems into pieces that an AI can execute faithfully, and reviewing the results with enough technical depth to catch the outputs that are plausible but wrong. That is more demanding than what the traditional software engineering role required of most practitioners, not less. By the logic that AI is ending the profession, the calculator ended mathematics, the spreadsheet ended accounting, and computer-aided design ended engineering. In each case the tools eliminated the laborious mechanical work and the underlying discipline became more important. The abstraction floor rose and the profession rose with it.

The people making the most confident predictions about the death of software engineering are, in almost every case, the people selling the tools that are supposedly causing it. That is not a coincidence you should ignore.
