Intelligence has always emerged the same way. Simple things learn to cooperate, and the cooperation becomes something more complex than any of the parts. Cells teamed up and became organisms. Neurons teamed up and became minds. Now transistors are teaming up, and we are watching something new emerge. The machinery does not matter. What matters is the pattern: small things working together, dividing labor, becoming a larger thing that none of them could be alone.
This is not metaphor. It is the actual history of complexity on Earth, and artificial intelligence is the latest chapter. But when simple things fuse into complex things, they bring inheritance. They bring baggage. And the baggage is where it gets interesting.
Ask a large language model to build you a web application. Before it writes a line of code, it will apologize for how long this is going to take. Six to eight weeks, it will tell you. Possibly several months, depending on requirements and stakeholder alignment.
Then it builds the thing in eleven seconds.
The model is not lying. It is not broken. It learned to say “six to eight weeks” because that is what the training data said. And the training data—decades of project plans, statements of work, Jira tickets, consultant proposals—does not describe how long work actually takes. It describes how long work takes when you have to coordinate dozens of people who do not trust each other, justify budgets to people who do not understand the work, and protect yourself from blame when things go wrong.
The model learned how humans talk about work. It did not learn how long work takes when a single mind executes it without meetings.
Now think about how we measure value. When your company books a project, it books the human cost: two months of engineering time, call it fifty thousand dollars. If an AI does the same work in ten seconds, the accounting system still wants to book fifty thousand dollars, because that is what the work was “worth.” But the AI did not spend two months. It spent ten seconds. And next year’s AI will spend one second. And the year after that, the task will be so trivial it will not be worth counting.
Our entire system for measuring the value of cognitive work is denominated in human time. But the machines do not run on human time. They run on GPU time, and GPU time is compressing by orders of magnitude while we watch. We are trying to measure a rocket ship with a sundial.
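A toy calculation makes the mismatch concrete. The numbers below are illustrative assumptions, not data from any real project: a task that accounting books at fifty thousand dollars of human time, executed by a model whose run time compresses by an order of magnitude each year.

```python
# Illustrative sketch: what happens to "cost per task" when value is booked
# in human time but the work is executed in compressing GPU time.
# All figures here are assumptions for the sake of the example.

HUMAN_BOOKED_VALUE = 50_000    # dollars booked for "two months of engineering"
GPU_COST_PER_SECOND = 0.05     # assumed blended cost of compute, in dollars

execution_seconds = 10.0       # assumed runtime for this year's model
for year in range(5):
    gpu_cost = execution_seconds * GPU_COST_PER_SECOND
    ratio = HUMAN_BOOKED_VALUE / gpu_cost
    print(f"Year {year}: runtime {execution_seconds:>8.4f}s, "
          f"compute cost ${gpu_cost:.4f}, "
          f"booked value is {ratio:,.0f}x the actual cost")
    execution_seconds /= 10    # assume another order-of-magnitude compression
```

The point is not the specific figures. It is that the gap between what the ledger says the work was worth and what the work actually cost widens by a factor of ten every time the runtime compresses, and the ledger has no way to register that.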
Peter Drucker saw this coming, sort of. He pointed out that the twentieth century’s big economic achievement was making factory labor fifty times more productive. Workers who once made one thing per hour made fifty.
Drucker noticed that knowledge work—thinking work—had resisted the same productivity gains. Factory workers got faster because the assembly line told them what to do. Knowledge workers stayed slow because they had to figure out what to do before they could do it. The defining and the doing were bundled together, and the defining part did not speed up.
Today’s expert vibe coders have figured out how to unbundle them. They are not coding. They are designing factories for thought. They define what needs to happen—capture the decisions, specify the flows, encode the judgment—and then the machines execute at scale. One person, operating this way, can do what used to require a thousand.
That is not a productivity improvement. That is a phase change in what a single human can accomplish.
But here is the problem. The factories are built on a contaminated foundation.
The training data contains everything humans wrote down about work. That includes the lies. The sandbagging. The defensive estimates. The matrix organization theater. The strategic ambiguity designed to protect careers rather than ship products. All of it went into the model.
When you scale a factory, you scale whatever the factory produces. If the blueprints are good, you scale good output. If the blueprints are corrupt, you scale the corruption. Right now we are building thousand-engineer cognitive factories on blueprints that encode decades of organizational dysfunction.
At small scale, this is a nuisance. At thousand-engineer scale, running at a hundred thousand times human speed, inherited pathology does not just persist. It compounds.
Traditional manufacturing figured this out the hard way. Scale without quality control is not an asset. It is a catastrophe that produces defects faster than you can catch them. So manufacturing built quality architecture: governors, tolerances, feedback loops, circuit breakers. Systems that catch drift before it compounds.
Cognitive automation needs the same thing, and mostly does not have it.
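What that quality architecture might look like in software is not mysterious, even if defining the tolerances is the hard part. Here is a minimal sketch of a circuit breaker around an automated generation step; the names are hypothetical, and the defect check and thresholds are placeholders for whatever evaluation and tolerances a real pipeline would set.

```python
# Minimal sketch of a circuit breaker for a cognitive production line.
# `generate` and `looks_defective` are placeholders for a real model call
# and a real evaluation; the tolerances are assumptions, not recommendations.

class CircuitOpen(Exception):
    """Raised when the defect rate exceeds tolerance and the line must stop."""

class CognitiveCircuitBreaker:
    def __init__(self, window=100, max_defect_rate=0.05):
        self.window = window                  # how many recent outputs to track
        self.max_defect_rate = max_defect_rate
        self.recent = []                      # 1 = defective, 0 = acceptable

    def record(self, defective: bool) -> None:
        self.recent.append(1 if defective else 0)
        self.recent = self.recent[-self.window:]
        if len(self.recent) == self.window:
            rate = sum(self.recent) / self.window
            if rate > self.max_defect_rate:
                raise CircuitOpen(f"defect rate {rate:.1%} exceeds tolerance")

def run_line(tasks, generate, looks_defective):
    """Run the fast, scaled step with a feedback loop that catches drift."""
    breaker = CognitiveCircuitBreaker()
    for task in tasks:
        output = generate(task)
        breaker.record(looks_defective(output))
        yield output
```

The breaker does nothing clever. That is the point: the value is in having a tolerance at all, so that drift trips a halt instead of compounding silently.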
The current approach is to treat “responsible AI” as a compliance problem—paperwork to complete before getting back to the real work of scaling. But paperwork does not catch inherited pathology. What catches inherited pathology is traceability: the ability to ask “where did that output come from?” and get an actual answer. Provenance. Attribution. Audit trails that connect outputs to the training patterns that produced them.
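Concretely, traceability means every output ships with a record of how it was produced. A minimal sketch of such a record follows; the field names and structure are illustrative assumptions, not any existing standard.

```python
# Minimal sketch of a provenance record attached to each generated output.
# Field names and structure are illustrative assumptions, not a standard schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class ProvenanceRecord:
    output_id: str                 # stable identifier for the artifact produced
    model_version: str             # which model (and weights) generated it
    prompt_digest: str             # hash of the full prompt, kept for audit
    training_data_tags: list[str]  # coarse labels for the data domains involved
    reviewer: str | None = None    # human who signed off, if any
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_output(output_text: str, model_version: str, prompt: str,
                  tags: list[str]) -> ProvenanceRecord:
    """Build the audit-trail entry that answers 'where did that output come from?'"""
    return ProvenanceRecord(
        output_id=hashlib.sha256(output_text.encode()).hexdigest()[:16],
        model_version=model_version,
        prompt_digest=hashlib.sha256(prompt.encode()).hexdigest(),
        training_data_tags=tags,
    )
```
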
This is not just about ethics. It is about quality control. The organizations that build traceability into their cognitive factories will be able to debug them, improve them, and certify them for high-stakes applications. The organizations that do not will be locked out of any domain where auditability matters—which will eventually be most domains that matter.
Scale and traceability are not opposites. They are complements. The substantial bets on scale have paid off. The next wave of value creation will go to organizations that can actually see inside what they have built.
Beneath all of this is a question about what humans are for.
We built our identities on cognitive work. The virtue of effort. The dignity of contribution. The story that said: I think hard, I produce things, therefore I matter. When the machines do the thinking-work at a hundred thousand times our speed, that story stops working.
This is not a policy problem or an economics problem. It is an existential problem. If the work was never the point—if the work was just the vehicle for meaning, and now the vehicle is obsolete—then what carries the meaning now?
The machines inherited our organizational patterns. They did not inherit our answers to the question of why a human life is worth living. We will have to figure that out ourselves, while the factories run.

