Picture two founders starting a company today.
One scales by hiring. Each person sees a small part of the company and develops expertise that stays in their head. Everyone has a different view of the right thing to do, and reaching consensus takes time. As the company grows, no single person holds the full picture and coordination overhead builds up.
The other founder does something different. They build an AI-native company. They structure every decision and action as the output of a superintelligent model and log every outcome. Although the setup took more effort, the model has complete context of the business and learns to run it over time.
After three years, the first company has a hundred people who each hold a piece of the puzzle. Their shared understanding is scattered across Slack threads, undocumented meetings, and the minds of employees who might leave tomorrow. The second company, however, is a unified operating system that has compounded every outcome from every decision for a thousand days. Always getting smarter. Always getting faster. Always getting better.
One of these companies ends up eating the other.
Companies are information processors
A company is just a group of actors working together as one legal entity. Its members transform data into reports, coordinate work, and decide what actions to take. Strip away the internal relationships and what remains is information processing: signals come in, decisions are made, and actions go out.
Much of the work in running a company is informational, whether it’s engineering, product, strategy, operations, finance, marketing, sales, legal, hiring, or logistics. But processing information is exactly what models are best at.
The inevitable conclusion, as AI capability improves, is that one day the model doesn’t just assist with these functions but becomes the company itself: a model that is both the orchestrator and the actor.
When that day comes, the company becomes the harness and the model becomes the company.
The last mile
The world is already moving in this direction. Foundation models can work for hours and soon days. Agents are managing logistics for supply chains and running vending machine businesses. Researchers are making steady progress on continual learning. So why don’t we have this today?
There are three barriers: one scientific, one engineering, and one sociological.
The scientific barrier is that models don’t yet learn from their experience the way we do. Until recently, most models learned by imitating humans (see imitation learning). But you can’t learn to run a company by watching others do it. You need to learn from your own mistakes. Recent advances in reinforcement learning are closing this gap and have produced a new generation of agents that can finally take actions reliably. But unlike us, AI still separates learning from acting. Researchers must train the model in a simulated environment before deploying it into production. The deployed model is frozen, and its core capabilities and knowledge cannot adapt to what it sees. As a result, an AI agent today is as capable on day one thousand as it was on day one. To solve this problem, we need new research to unlock continual learning.
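The frozen-versus-continual distinction can be made concrete with a toy sketch. This is not a real neural network, just a one-parameter "model" with a made-up environment (`outcome = 3 * signal`); all class and variable names are illustrative, not from any existing system:

```python
class FrozenAgent:
    """Trained once, then deployed with fixed weights: day 1000 == day 1."""
    def __init__(self, weight: float):
        self.weight = weight

    def act(self, signal: float) -> float:
        return self.weight * signal

    def observe(self, signal: float, outcome: float) -> None:
        pass  # deployment never changes the weights


class ContinualAgent:
    """Updates its weight from every observed outcome (online gradient step)."""
    def __init__(self, weight: float, lr: float = 0.1):
        self.weight = weight
        self.lr = lr

    def act(self, signal: float) -> float:
        return self.weight * signal

    def observe(self, signal: float, outcome: float) -> None:
        error = self.act(signal) - outcome
        self.weight -= self.lr * error * signal  # gradient of squared error


# The environment's true relationship is outcome = 3 * signal.
frozen, continual = FrozenAgent(1.0), ContinualAgent(1.0)
for day in range(200):
    signal, outcome = 1.0, 3.0
    frozen.observe(signal, outcome)
    continual.observe(signal, outcome)

print(round(frozen.weight, 2), round(continual.weight, 2))  # → 1.0 3.0
```

After a thousand days of identical feedback, the frozen agent is exactly where it started, while the continual agent has converged on how the environment actually behaves. That gap is the whole argument.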
The engineering barrier is building a harness that captures every action and outcome of a company as training signal. A company is a complex distributed system. Dozens of processes run in parallel. Imagine logging every customer interaction, every operational decision, and every sale as a live feedback loop that shapes the model’s behavior in real time. We don’t have this infrastructure yet, but it’s a solvable engineering problem. In fact, this kind of harness is being built as we speak (see OpenAI Frontier).
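What such a harness does is conceptually simple: pair every decision with the context it was made in, attach the outcome once it's known, and hand the resolved pairs to a learning pipeline. A minimal sketch, with entirely illustrative event names and fields (a real system would stream these into a model-update service rather than hold them in memory):

```python
import time
from dataclasses import dataclass, field, asdict
from typing import Optional


@dataclass
class Event:
    actor: str                      # which process or agent acted
    context: dict                   # what it saw (the model's input)
    action: dict                    # what it did (the model's output)
    outcome: Optional[dict] = None  # filled in once the result is known
    ts: float = field(default_factory=time.time)


class Harness:
    """Logs every (context, action) pair and resolves it with an outcome later."""
    def __init__(self):
        self.log: list[Event] = []

    def record(self, actor: str, context: dict, action: dict) -> int:
        self.log.append(Event(actor, context, action))
        return len(self.log) - 1  # handle used to attach the outcome later

    def resolve(self, event_id: int, outcome: dict) -> None:
        self.log[event_id].outcome = outcome

    def training_batch(self) -> list[dict]:
        """Only resolved events become training signal."""
        return [asdict(e) for e in self.log if e.outcome is not None]


h = Harness()
eid = h.record("pricing-agent", {"sku": "A1", "demand": 120}, {"price": 9.99})
h.record("support-agent", {"ticket": 42}, {"reply": "refund issued"})  # unresolved
h.resolve(eid, {"units_sold": 80, "revenue": 799.2})
print(len(h.training_batch()))  # → 1
```

The hard part isn't this data structure; it's doing this at the scale of a whole company, across dozens of parallel processes, without losing the link between an action and its eventual outcome.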
The last barrier, the sociological one, is different. It may not be solvable at all, at least by incumbent companies.
No one will build this willingly
Let me tell you a secret. Companies are not actually optimized for shareholder value. They are optimized for the people working there. Careers are built on judgement, taste, and the accumulation of decision-making power.
Now imagine telling these people that the optimal future for their company is one that doesn’t include them.
No one will build this willingly. Managers may try their best to automate the work beneath them, but they will resist replacing themselves. The incentives are perfectly misaligned, a classic principal-agent problem. In fact, we’ve seen this before in finance. Portfolio managers are careful to keep their decision-making processes to themselves. Quants are rumored to keep their code undocumented on purpose. Making your expertise legible in the age of learning machines is simply self-obsolescence. Incumbents will try, but they will hit a ceiling set by the people who have the most to lose. No one will willingly fire themselves, no matter how sincere they may sound.
Starting from scratch
The science is close. The engineering is solvable. The only barrier left is a deeply human one.
The first autonomous companies won’t be built by the incumbents who have the most to gain. They’ll be built by hyperscaling startups with no one to replace.