What agriculture and car manufacturing can teach us about the use of generative AI in software engineering
It’s not often that an article about AI coding tools takes us back nearly two centuries. Come with me on a trip down memory lane to revisit a couple of lessons other industries learned long ago.
In the 1840s, the German chemist Justus von Liebig popularized an idea about plant growth, first articulated by agronomist Carl Sprengel in 1828. Plants need many things to grow: water, sunlight, nutrients, good soil. But their growth is not determined by the sum of those inputs. It is constrained by whichever one is in shortest supply.
This idea later became famous through a simple illustration known as Liebig’s barrel: a wooden barrel made of staves of uneven height. No matter how tall most of the staves are, the barrel can only hold water up to the height of the shortest one. This illustrated that the capacity of a system is determined by its tightest constraint.
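If you prefer code to barrels, the rule fits in a few lines. Here is a toy sketch (the numbers are made up, with each input normalized to a 0–1 scale):

```python
# Liebig's barrel in one line: capacity is set by the scarcest input,
# not by the sum or average of the inputs. Numbers are illustrative only.
inputs = {"water": 0.9, "sunlight": 0.8, "nitrogen": 0.3, "phosphorus": 0.7}

capacity = min(inputs.values())           # 0.3: the shortest stave wins
bottleneck = min(inputs, key=inputs.get)  # "nitrogen"
print(f"capacity={capacity}, bottleneck={bottleneck}")
```

Raise any input other than nitrogen and `capacity` does not move at all.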
Agriculture spent millennia bumping into this lesson. Farmers improved tools. They organized labor more efficiently. They expanded cultivated land. They experimented with irrigation, crop rotation, and manure. Yet crop yields repeatedly hit ceilings that seemed difficult to explain.
The invisible limit, in many cases, was nitrogen. Plants need nitrogen to grow, but they cannot directly use the nitrogen that makes up most of our atmosphere. Soil nitrogen is a scarce resource, and when fields were repeatedly cultivated, they often lost the fertility needed to sustain higher yields.
Then came one of the most consequential chemical breakthroughs of the twentieth century. In 1909, Fritz Haber, working with Robert Le Rossignol, demonstrated high‑pressure ammonia synthesis from nitrogen and hydrogen. Carl Bosch at Badische Anilin‑ und Soda‑Fabrik (BASF) then solved the engineering challenges required to scale it industrially. By 1913, the first large‑scale Haber–Bosch plant at Oppau was operating.
Synthetic fertilizer transformed agriculture. Crop yields rose dramatically.
But agriculture did not suddenly become unlimited. Instead, other limits became visible. Water availability. Other nutrients like phosphorus and potassium. Pest pressure. Soil degradation. Storage and transport.
Once nitrogen stopped being the bottleneck, something else took its place. Systems are governed by whichever constraint is currently narrowest.
Let’s fast-forward a century and jump to another continent. When people mention the Toyota Production System, they often frame it as a story about efficiency: eliminating waste, producing faster, optimizing manufacturing.
But Toyota’s real insight was not about speed. It was about flow, and how optimizing for speed alone can make systems worse.
During the 1950s and 1960s, engineers like Taiichi Ohno and leaders like Eiji Toyoda developed a production philosophy that challenged a very common industrial instinct: the belief that every station in a factory should run as fast as possible. That instinct sounds sensible. In practice, it often harms the system.
If one workstation produces faster than the next workstation can absorb, the extra output does not become progress. It becomes inventory. Inventory piles up between steps, consumes space, hides defects, and delays feedback. It creates the illusion of productivity while actually making the system less responsive.
Toyota addressed this with a pull system called kanban (if that term sounds familiar, it’s not a coincidence). Each container of parts carried a kanban card. When a downstream station used the last item, it sent the card upstream as a signal to replenish only that amount. No signal meant no production. Limiting the number of cards in circulation set explicit work‑in‑process limits, shortened queues, and surfaced problems faster.
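Here is a toy sketch of that mechanic (my illustration in Python, not Toyota’s actual system, and the numbers are hypothetical). A fixed pool of cards caps work in process, and upstream production happens only when a card comes back:

```python
from collections import deque

class KanbanStation:
    """Toy pull system: a fixed pool of kanban cards caps work in process."""

    def __init__(self, num_cards: int):
        self.free_cards = num_cards  # cards not currently attached to a part
        self.buffer = deque()        # finished parts waiting for downstream

    def produce(self) -> bool:
        if self.free_cards == 0:     # no returned card -> no production
            return False
        self.free_cards -= 1
        self.buffer.append("part")   # the part travels with its card
        return True

    def consume(self) -> bool:
        if not self.buffer:
            return False
        self.buffer.popleft()
        self.free_cards += 1         # card returns upstream: "make one more"
        return True

station = KanbanStation(num_cards=3)
print([station.produce() for _ in range(5)])  # [True, True, True, False, False]
station.consume()                             # downstream uses one part...
print(station.produce())                      # ...freeing a card: True
```

The queue between stations can never exceed three parts, so a slow consumer immediately throttles the producer instead of being buried in inventory.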
The goal was not to maximize activity at each step. The goal was to optimize throughput, the flow of work through the entire system.
Agriculture learned that systems move at the pace of their narrowest constraint. Car manufacturing learned that speeding up one part of a system can make the whole thing worse if the rest of the system cannot keep up.
The software industry is now learning both lessons at once.
For decades, the bottleneck really was writing code
For most of software’s history, writing code was genuinely expensive. It required specialized expertise, sustained concentration, and a large amount of manual effort.
The first big gains came from abstraction in the 1950s. Higher-level languages let engineers describe what they wanted without hand-encoding every machine instruction, and compilers automated the translation. That didn’t just make development faster, it made whole classes of work possible for more people.
As the field matured over the following decades, tooling kept removing mechanical friction. Editors, debuggers, and build systems improved, while languages became more expressive and idiomatic. Libraries and frameworks steadily absorbed common patterns so teams didn’t have to reinvent the same solutions over and over.
Then came another leap: in the early 2000s we introduced tools that made codebases easier to navigate and less risky to modify. IDE features like symbol search, automated refactoring, and static analysis reduced the cost of understanding code and increased confidence when changing it.
Coordination costs dropped, too. Version control evolved into a core part of how teams work, making branching, reviewing, merging, and collaboration far less painful than earlier approaches.
Finally, the industry invested heavily in lowering the cost of verification and delivery. Automated tests, continuous integration, deployment automation, and observability tightened feedback loops and reduced the fear of shipping.
Looked at historically, the pattern is clear. Software engineering has spent the past roughly seventy years trying to make it cheaper to produce and change code. And those efforts were not misguided. They were attacking a real constraint. And it worked. Software became dramatically easier to build than it had been in the early decades of computing.
But even after all that progress, writing and changing code still remained the dominant bottleneck in the software industry.
That is, until generative AI tools walked in the door. They don’t just continue the curve a little further, they finally bend it far enough to shift what limits progress.
For the first time in the history of software, producing code stopped being the narrowest constraint.
Engineers can now generate working drafts of migrations, tests, components, complex database queries, API clients, utility functions, documentation, and refactors in seconds. Boilerplate and exploration become increasingly cheap. The cost of trying multiple implementations drops dramatically.
Code becomes abundant. And when one constraint disappears, the next one quickly becomes visible.
The new bottleneck: absorption capacity
Absorption is what turns a diff into dependable value: deciding what to build, fitting it into the system, proving it behaves as intended, and understanding the value it delivers.
Generative AI accelerates the production of implementation. It does not, by itself, accelerate the production of clarity.
Pull requests arrive faster than human-driven review can stay meaningful. Multiple versions of the same idea appear because generation is quick but alignment is not. Code looks correct within a small context while the architecture slowly degrades. Test coverage increases while the trust in it erodes. Experiments multiply while product decisions lag. And the codebase expands faster than shared understanding.
And surplus code starts to resemble Toyota’s worst kind of waste: overproduction. If it can’t flow through the rest of the system, it’s not throughput. It’s inventory.
Motion is not progress. Progress is change that the organization can successfully absorb.
How we increase absorption capacity
The solution obviously isn’t to slow teams down. It’s to widen the new bottleneck so they can keep moving fast.
If generative AI makes production cheaper, leadership leverage shifts toward increasing absorption capacity and optimizing flow. In practice, that means redesigning our systems so that rapid generation gets converted into reliable value.
1) Make problem framing part of the work, not a preamble
When ambiguity can generate code at scale, clarity becomes a first-order production input.
That has an organizational implication: the work of writing problem statements and PRDs can’t live entirely on the product side of the wall anymore. Engineers sit downstream of the ambiguity, and ironically, they’re also the ones who can turn a vague prompt into a plausible implementation that gets accepted before anyone notices the question was underspecified, shipping a fantastic solution to the wrong problem.
Treat crisp problem framing, explicit acceptance criteria, and clear non-goals as a shared deliverable between product and engineering, not a handoff.
2) Lower the cost of confidence
If generation is cheap and verification is expensive, the highest-leverage investment is obvious: make verification cheaper.
This isn’t just “write more tests”. It’s building fast feedback loops that cover both correctness and outcomes. Strong CI signals that are impossible to ignore. Static analysis and type checks that catch entire classes of mistakes immediately. Security and dependency scans that run by default. Observability as a standard output of change, not an afterthought. Feature flags, staged rollouts, guardrail metrics. Product-facing feedback that closes the loop after deploy, and quick experiments that tell you whether customers are better off.
Reduce the cost of answering the only questions that matter: “is this safe to merge and ship?” and “did it actually improve anything for the customer?”
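To make that concrete, here is a minimal sketch of such a gate in Python. Every signal name, threshold, and data source in it is hypothetical; the point is that both questions become cheap, automated checks over signals the pipeline already collects:

```python
from dataclasses import dataclass

@dataclass
class ChangeSignals:
    tests_passed: bool          # CI test suite
    type_check_passed: bool     # static analysis / type checks
    security_scan_passed: bool  # dependency and security scans
    error_rate_delta: float     # guardrail metric from a staged rollout (%)
    conversion_delta: float     # product outcome from the same rollout (%)

def safe_to_ship(s: ChangeSignals, max_error_increase: float = 0.5) -> bool:
    """Question one: is this safe to merge and ship?"""
    correctness = s.tests_passed and s.type_check_passed and s.security_scan_passed
    return correctness and s.error_rate_delta <= max_error_increase

def improved_outcomes(s: ChangeSignals) -> bool:
    """Question two: did it actually improve anything for the customer?"""
    return s.conversion_delta > 0.0

signals = ChangeSignals(True, True, True, error_rate_delta=0.1, conversion_delta=1.8)
print(safe_to_ship(signals), improved_outcomes(signals))  # True True
```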
3) Treat architecture and conventions as scaffolding for AI
AI will scale whatever structures you already have. If your system is legible, it accelerates good patterns. If it’s ambiguous, it accelerates ambiguity — fast, and with a very convincing tone of voice.
The practical implication is that architecture can’t just be a diagram people rediscover during incidents. It has to exist as a set of operational constraints in the everyday developer workflow. The model should be able to infer the right shape of change from the codebase itself, and the humans should be able to verify that shape quickly.
Make the right thing easy to discover and hard to accidentally violate: clear service and module boundaries, consistent naming, and a small number of blessed ways to do common tasks. Back that up with templates and examples that are close to production.
Also, document what must remain true. Invariants, contracts, and “we do it this way because…” notes are exactly the context both humans and AI lack when they make locally reasonable changes that cause drift at a broader scale. Keep those decisions lightweight and available to your AI toolchain — short ADRs, code comments where the constraints live, and guardrails enforced in CI — so intent survives longer than a sprint and doesn’t depend on people’s memory. Your conventions keep the system steerable at higher speeds.
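As one example of a guardrail enforced in CI, here is a minimal sketch of a boundary check that fails the build when code imports across a declared module boundary. The module names and the rule are invented for illustration; dedicated tools (import-linter, for Python) offer a more robust version of the same idea:

```python
import ast
import pathlib
import sys

# "We do it this way because...": billing must not reach into checkout internals.
# Hypothetical boundary; declare your own real ones here.
FORBIDDEN = {"billing": {"checkout.internal"}}

def violations(root: str = "src") -> list[str]:
    found = []
    for path in pathlib.Path(root).rglob("*.py"):
        owner = path.relative_to(root).parts[0]  # top-level module of this file
        banned = FORBIDDEN.get(owner, set())
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            for name in names:
                if any(name.startswith(b) for b in banned):
                    found.append(f"{path}: imports {name}")
    return found

if __name__ == "__main__":
    problems = violations()
    print("\n".join(problems) or "boundaries intact")
    sys.exit(1 if problems else 0)  # a red build is how intent survives
```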
4) Measure throughput instead of output
When writing code becomes fast and cheap, output metrics become actively misleading. You may see them increase sharply while real throughput stays flat, or even declines.
Be especially suspicious of vanity metrics like lines of code, commit counts, PRs opened, story points burned, tickets started, or tokens used. In an AI-accelerated workflow, these are mostly measures of activity, and activity is exactly what becomes cheap.
Prefer measures that reflect absorption and flow: lead times from ideation to ready and from ready to shipped; PR queue times; change failure rates; rollback rates; and operational load, like the frequency of pages and incidents.
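As a sketch of what that can look like in practice, here is a small Python example computing lead time, PR queue time, and change failure rate from per-change records. The field names and data are made up; in reality they would come from your tracker, version control, and deploy tooling:

```python
from datetime import datetime
from statistics import median

changes = [  # one record per shipped change (illustrative data)
    {"ready": "2024-05-01", "pr_opened": "2024-05-02",
     "merged": "2024-05-04", "shipped": "2024-05-05", "rolled_back": False},
    {"ready": "2024-05-03", "pr_opened": "2024-05-03",
     "merged": "2024-05-10", "shipped": "2024-05-12", "rolled_back": True},
]

def days(start: str, end: str) -> int:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

lead_times = [days(c["ready"], c["shipped"]) for c in changes]      # flow
queue_times = [days(c["pr_opened"], c["merged"]) for c in changes]  # absorption
failure_rate = sum(c["rolled_back"] for c in changes) / len(changes)

print(f"median lead time (ready -> shipped): {median(lead_times)} days")
print(f"median PR queue time: {median(queue_times)} days")
print(f"change failure rate: {failure_rate:.0%}")
```

None of these numbers can be inflated by simply generating more code; they only improve when the whole system digests change faster.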
Generative AI can make teams dramatically more capable. But the advantage doesn’t go to whoever generates the most code. It goes to whoever can turn abundant code into coherent systems and reliable delivery of value.
For many engineering leaders, that is the new challenge: not helping teams produce more code, but building an engineering culture that can absorb more meaningful change.