What happens when you saturate the learning curve?
How have OpenAI and Anthropic, two startups, stayed ahead of all the tech giants? They committed early to the scaling hypothesis, yes, but after that initial lead, after everyone knew scaling was the way, why couldn’t other companies with infinite resources just double their effort and overtake them?
For years, I thought about Moore’s Law and wondered how progress could be so fast. But eventually, I realized the better question was: why was progress so slow? Over the last 60 years, chip fabrication and design have changed many times over. Total funding increased by many orders of magnitude. Geniuses and their startups came and went. World politics and economics changed radically. And yet none of it increased the average rate of improvement.
From Gordon E. Moore’s The Future of Integrated Electronics, 1965
I believe the reason is that the “learning rate” of technology has a maximum speed, a physical speed limit, a limit of “the whole system” of people and machines and the world, and achieving that maximum speed is easier than you’d think. And once you have enough money and enough talent, you saturate the learning rate: more simply doesn’t help.
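To make the saturation claim concrete, here is a toy model. The functional form (a simple saturating curve) and all the numbers are illustrative assumptions of mine, not anything measured; the point is only the shape: the rate climbs with resources at first, then flattens against a ceiling.

```python
# Toy model of a saturating "learning rate": the speed of progress rises
# with invested resources but approaches a hard ceiling v_max, the
# system-wide speed limit. The functional form and constants are
# illustrative assumptions, not measurements.

def progress_rate(resources: float, v_max: float = 1.0, k: float = 10.0) -> float:
    """Rate of improvement as a function of invested resources.

    v_max: the speed limit of "the whole system".
    k: the resource level that gets you to half of v_max.
    """
    return v_max * resources / (resources + k)

# Past a point, 10x the resources barely moves the rate:
for r in [1, 10, 100, 1_000, 10_000]:
    print(f"resources={r:>6}  rate={progress_rate(r):.3f}")
```

In this sketch, going from 1,000 to 10,000 units of resources buys less than a 1% faster rate, which is the sense in which "more simply doesn’t help" once you are near the ceiling.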
OpenAI and Anthropic got off to an early start, yes, but then each had enough funding and talent to hit the maximum speed on its own. Literally no one can surpass them technologically, unless they blunder, because no one can go faster than that limit.
The entire economy, as one big physical system, may also have a speed limit on economic growth, and we’ve been pegged at that limit for 50 years. Yes, in previous eras, growth was faster, but in the IT age, it hasn’t budged.
A corollary here is that simply sustaining exponential growth requires increasingly Herculean effort. As exotic and new as AI is, it might simply be what keeps us at the speed limit, and not something that exceeds it. It’s an open question: will AI usher in dramatically faster growth, or “just” keep the current exponential growth pegged?
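The "Herculean effort" corollary can also be sketched numerically. Suppose, purely as an illustrative assumption, that each successive doubling of capability costs twice the effort of the last; then keeping the doubling time constant forces effort per unit time to grow exponentially, even though the output curve looks like the same smooth exponential from the outside.

```python
# Toy accounting for sustaining exponential growth: assume each
# successive doubling of capability costs `escalation` times the effort
# of the previous one. The escalation factor of 2 is a made-up
# illustrative number, not an empirical estimate.

def effort_for_doubling(n: int, base_effort: float = 1.0, escalation: float = 2.0) -> float:
    """Effort required to achieve the n-th doubling of capability."""
    return base_effort * escalation ** n

# Ten doublings on a fixed cadence: the tenth costs hundreds of times
# the first, and cumulative effort is dominated by the latest doubling.
total = sum(effort_for_doubling(n) for n in range(10))
print(f"effort for 1st doubling:  {effort_for_doubling(0):.0f}")
print(f"effort for 10th doubling: {effort_for_doubling(9):.0f}")
print(f"cumulative effort:        {total:.0f}")
```

Under these assumptions, an AI that "merely" keeps growth pegged at the limit would still be doing something remarkable: supplying the exponentially growing effort the curve demands.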