While I spend 99% of my time thinking about hardware, synthetic fuels, and the solar industrial revolution, the progress in AI has not gone unnoticed. I’m writing this post not to share any particular insights but instead to record the questions I think are interesting and how I’m thinking about them as of today.
What will the impact of AGI/ASI be on economic growth?
Dwarkesh Patel, Eli Dourado, Noah Smith, and Tyler Cowen among others have recently discussed potential impacts of AGI ranging from not much (AGI will be slowed down by the same things as everything else) to 50% GDP growth (armies of humanoid robots systematically turning the crust into Capital).
A model I’ve long been interested in is the Corporation as a stand-in for AGI. We need some non-human autonomous legal and economic entity, and a corporation is exactly that. The Fortune 500 are already non-human superintelligences. They operate 24/7/365 according to inscrutable internal logic, routinely execute feats of production unthinkable for any human or other biological organism, often outlive humans, can exist in multiple places at once, and so on.
To take this analogy further, you could even imagine spinning up a few million headless Nevada LLCs, assigning each to some agentic AGI running in the cloud somewhere, and turning them loose. Years ago I registered feralrobotics.com to explore the idea of mass producing ambient solar powered quadcopters with basic sensors and an Internet connection. But as Paul Graham says, the robots live in a data center for efficiency.
There is one other interesting angle to this question when it comes to speculating about economic impact. Let’s imagine a corporation with a bunch of internal AI functionality that is able to perform at a higher level than fully human corporations and, as a result, compound growth at a higher rate. As an outside observer, how would this differ from a handful of existing extreme outlier companies that can already do things other companies have proven unable to do?
Take for example SpaceX. Over the last 15 years, dozens of competing launch companies have been founded, often by SpaceX veterans who have already learned the hard lessons, often with significantly more money and a friendlier regulatory environment than SpaceX, and they’ve pretty much all failed. SpaceX is, culturally, often a pretty chaotic place to work, and yet they’ve landed the Falcon 9 booster over 400 times.
I’m not saying Elon is ASI (though he’s obviously SI, and many peer CEOs attribute his success to this as well as to persistence and pain tolerance) but if he were, what difference would it make? Elon’s biographer Isaacson has speculated about succession planning at SpaceX, but maybe that’s what xAI is training Grok to do?
If Grok can simulate Elon, and the rest of the F500 uses Grok to run their organizations, and as a result they achieve SpaceX levels of productivity and innovation, I can’t imagine it wouldn’t at least double growth. But while Tesla and SpaceX have succeeded thus far, it has taken 20+ years. Coordinating large numbers of people has a steep cost in efficiency.
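The arithmetic of differential compounding is what makes this question interesting. A toy sketch (the 10% and 30% growth rates here are invented for illustration, not estimates of any real firm):

```python
# Toy compounding comparison. The 10% and 30% rates are invented
# for illustration, not estimates of any real firm's growth.
def compound(principal: float, rate: float, years: int) -> float:
    """Value after compounding `principal` at `rate` for `years` years."""
    return principal * (1 + rate) ** years

baseline = compound(1.0, 0.10, 20)  # a typical well-run firm
outlier = compound(1.0, 0.30, 20)   # a SpaceX-like outlier
print(f"baseline: {baseline:.1f}x, outlier: {outlier:.1f}x")
# → baseline: 6.7x, outlier: 190.0x
```

Over a 20-year horizon, a modestly higher compounding rate produces not a modestly bigger company but one more than an order of magnitude larger, which is why even a small AI-driven edge in organizational productivity could dominate aggregate growth.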
Can someone please write a book that covers the organizational aspects of the Elon Industrial Complex?
To what extent do existing organizational outliers model what ASI can achieve in our economy?
Will ASI be able to help us formalize an ELO score for hardcore technical management?
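The Elo machinery itself is standard (it is the formula chess uses); the speculative part is deciding what counts as a “match” between technical managers, e.g. head-to-head program outcomes. A minimal sketch of the standard update rule:

```python
# Standard Elo rating update. The formula is the usual chess one;
# applying it to technical-management "matches" is the speculative part.
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, score_a: float, k: float = 32) -> float:
    """New rating for A after a match (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A 1500-rated manager upsets a 1900-rated one:
print(round(update(1500, 1900, 1.0), 1))  # → 1529.1
```

The hard part an ASI might help with is not this arithmetic but constructing comparable, repeatable contests between managers running very different organizations.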
What are the asymptotic properties of human and machine intelligence as a function of additional compute time?
Humans seem to be much more efficient in training, implying that whatever humans do in place of backpropagation is at least one complexity class faster than pure backprop. That is, O(N log N) vs O(N^2), or maybe even better.
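To put numbers on the gap between those two complexity classes (constants ignored; this is purely about asymptotic shape, and the N values are arbitrary):

```python
import math

# How far apart one complexity class puts you as N grows.
# Constants are ignored; only the asymptotic shape matters here.
for n in (1e6, 1e9, 1e12):
    nlogn = n * math.log2(n)
    nsq = n * n
    print(f"N={n:.0e}: N^2 is {nsq / nlogn:.1e}x more work than N log N")
```

At a trillion training examples the quadratic algorithm does roughly ten orders of magnitude more work, which is the scale of advantage a single complexity-class improvement would represent.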
But when it comes to inference, humans have different modes of thought over different time scales. Most of the time, we make decisions intuitively and almost instantly, with rationalizations arriving a second later. With collaboration or due consideration, or formal reasoning, we can sometimes achieve better decisions. With a pen and paper, we can execute problem solving algorithms in physics or math or poetry, extending the capabilities of our natural hardware to solve tougher problems. And over a long enough time scale, we can generate blog posts and books, both of which can embody compressed intelligence and a much higher signal to noise ratio than an average conversation.
Similarly, LLMs that have exhausted the training set can still achieve better performance by running so-called Chain of Thought algorithms. Still an area of active research, these enable incrementally better results, albeit at the cost of significantly more compute time. Currently, it’s not clear that results continue to improve beyond a fairly basic level, with issues around context and coherence undermining performance.
The question, therefore, is something like “What is the asymptotic performance of human and artificial cognition as a function of flops, time, cycles, watts, or some other extensive measure?” Note here that I’m less interested in an absolute comparison of human and AI intelligence, but rather more interested in the scaling properties with effort.
My working hypothesis is that human cognition improves markedly once pen is put to paper, and in some cases can continue to improve with extended writing (but note many prominent failures). In contrast, the leading LLMs seem to achieve an incremental improvement with CoT and then flatline. For example, for the sorts of questions I’m obsessed with (physics first principles stuff) the LLMs give bad answers in general. With CoT, they take a lot longer to give an answer that is bad in a more obscure way, but the answer is usually not much closer to being correct. Sometimes when it is, it seems that it might have arrived there by exhaustion rather than the machine equivalent of what we would call insight or inspiration.
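This hypothesis can be caricatured with two toy curves. The functional forms and constants below are invented, not fitted to any benchmark: a saturating exponential for CoT and a slow logarithmic climb for extended human deliberation. Per the earlier caveat, only the shapes matter, not the absolute levels.

```python
import math

# Purely illustrative functional forms -- not fitted to any benchmark.
# Hypothesis: CoT gains saturate quickly, while human deliberation
# keeps improving (roughly logarithmically) with more time and effort.
def cot_performance(compute: float, ceiling: float = 0.6, rate: float = 1.0) -> float:
    """Saturating curve: quick early gains, then a plateau at `ceiling`."""
    return ceiling * (1 - math.exp(-rate * compute))

def human_performance(effort: float, base: float = 0.4, slope: float = 0.1) -> float:
    """Slow but steady logarithmic improvement with effort."""
    return base + slope * math.log1p(effort)

for t in (1, 10, 100):
    print(t, round(cot_performance(t), 3), round(human_performance(t), 3))
```

Under these invented constants the CoT curve is pinned near its ceiling by t = 10 while the human curve keeps climbing, which is the flatline-vs-continued-improvement pattern described above.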
I wanted more insight into this question, so I asked GPT o3 Deep Research, but it mostly agreed with me.

Wow, AI is so bad at physics. What will it take to fix it?
I have a project to convert my IPhO notes into a beautiful and short textbook on the basics of first principles physics.

Most of what we know about physics can be boiled down into about 50 pages of notes. This compression property of the hard sciences seems to leave the LLMs at a profound disadvantage, as their training requires the consumption of reams of material. Yet the actual step-by-step process of physics problem solving is not that hard. I learned it in high school, at a time when I couldn’t have written a 500-word essay worthy of ChatGPT if my life depended on it.
And yet, the AIs still really suck at physics. What’s it going to take?
Will the economic bottleneck be managers or foot soldiers?
Put another way: is the Anthropic Claude model of producing competent software engineers or computer engineers who need skilled human managers better? Or is the hypothetical Grok model of producing clones of Elon who can extend his reach and grasp into other parts of the economy better? Who will commoditize whom? Which side of the API will humans end up on?
In the AI as cognitive prosthesis model, what percentage of the gains will accrue to the ends of the bell curve versus the middle?
It seems highly likely to me that AI tools will function as cognitive prostheses, improving productivity, life outcomes, and so on for people anywhere on the spectrum of human capability. But, like the previous question, it’s not clear where the benefits will accrue most, either in an absolute sense or relative to basic human needs, particularly rivalrous ones.
For example, a world where AI cleanly doubles GDP per capita is a good, if boring, outcome.
A world where 99% of the productivity gains accrue to the 1% most productive people is hardly unlikely – it could look like the previous scenario, except with additional exceptional wealth creation at the very top. Everyone is much better off. Our scientific and technical progress accelerates beyond all previous limits.
Consumption is likely to increase along with productivity growth, creating better lives for billions. But if consumption is proportional to wealth and the good is rivalrous, we could see exceptional productivity in tiny corners of the economy bid up prices for everyone. We’ve already seen this happen in housing. The good fortune of better health created a politically powerful inverted demography, which in turn crushed the production of enough new housing to keep up with that same growth. As a result, San Francisco, among hundreds of other cities, has become too unaffordable to function as a real city with a range of different professions, not just elite-tier software developers.
If AI 100xes our GDP and 100xes housing prices, or gold prices, or food prices, we could end up in a bizarre situation where everyone is far richer than they were before, and yet some set of necessities are still unaffordably expensive.
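The arithmetic of that scenario is simple but worth making explicit (the income and rent figures are invented for illustration):

```python
# If incomes and the price of a rivalrous necessity both scale 100x,
# affordability (price / income) is unchanged. Numbers are illustrative.
income, rent = 100_000, 30_000            # annual figures before the boom
income2, rent2 = income * 100, rent * 100  # after a uniform 100x
print(rent / income, rent2 / income2)      # → 0.3 0.3
```

Nominal wealth grows 100x, but the share of income consumed by the rivalrous good is identical, which is why the paragraph below argues that only increased physical supply, not redistribution, resolves it.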
No kind of AI-funded UBI can solve this problem. Only the technological and regulatory innovation to ensure that everything people and AIs want can be made in more abundance, and therefore more cheaply, over time. We should start now with legalizing housing construction, obviously!
In the limit, this could be important. It seems likely that AI economic output per watt of power consumed will far surpass even the most productive humans’, and that AI output per acre of solar photovoltaic land will surpass farming’s. What, then, will humans eat?
Why is progress so even?
Conventional wisdom on AI is that it’s a tool that helps its own development, and that recursive improvement will soon lead to some kind of takeoff scenario. Yet that is not what we see in the market, at least as of May 2025. For the last 3 years, multiple leading labs (xAI, OpenAI, Anthropic, Google) have released a bewildering variety of models whose performance has uniformly increased, without any one lab gaining the sort of decisive lead you would expect given the time, money, and general dynamism of the sector.
What this means is that equalizing forces are currently beating inflationary forces.
Equalizing forces include: hard ceilings awaiting conceptual breakthroughs; a lack of data; idea diffusion between labs; collective reticence to expose truly experimental stuff on the frontier; and commonality of computing substrate.
Inflationary forces include: recursive improvement, particularly the sort that accrues to the best people and teams in the sector; differential resources in money, compute, and data, which are substantial between the best and worst resourced labs; and differential insight, after an obscure breakthrough has been realized.
My best guess is that the space of algorithms is big and our mapping tools are poor, and a few good ideas are very hard to keep secret. But I’m surprised that some labs having literally >10x more training compute makes so little difference.
What is the obsession with AGI UBI?
Oooh, another thing to tax! This is usually discussed on the sidelines, but there is a bewildering variety of baroque taxation schemes discussed around AI, usually with the justification “to ensure the economic benefits are shared by all”. This has never made much sense to me. When I can purchase a battery powered, internet connected supercomputer for <$1000 and put it in my pocket (i.e., a smartphone), am I not sharing in some unimaginable economic benefit of for-profit tech development by private companies? Apple’s profits are a signal that they are already generating and sharing incredible economic value. Amazon’s enterprise value is a few trillion dollars, but they earn a small margin on a huge range of products and services, whose economic benefits mostly accrue to the customer. That is, Amazon shareholders have made a trillion or so, but their customers derive a trillion or so of economic value every year.
This idea that somehow running a successful company that delivers enough value to generate profit is a moral failing of our society and we should levy punitive taxes on these companies is the sort of insane backwards economic thinking that has pushed most of Europe into terminal economic stasis.
If the privately funded AI companies successfully deliver AGI or ASI, and if we can control potential negative externalities like catastrophic weaponization (which is a separate issue from taxation or UBI), then everyone on Earth will benefit from economic growth no longer being bottlenecked by the cost, care, and feeding of rather frail human intelligence. Did we get rich in 2025 because England levied a crushing punitive tax on steam engines in 1800? No. We got rich because we bootstrapped the complexity and productivity of our economic system by allowing a diverse array of capital allocators to re-invest the products of their development largely as they saw fit, generating an unimaginable diversity of businesses and products, doubling life expectancy, increasing global GDP by a factor of 100 (!, so far) and really getting a flywheel going on some powerful cumulative engine of wealth. Yes, some people got very very rich, but everyone got much richer, especially the billions alive today who would simply be dead from starvation if we hadn’t, for example, commercialized synthetic fertilizers.
This is quite separate from the question of whether UBI is a good idea, whether an AGI-powered economy could be productive enough to fund it, how we would avoid an imbalance between goods and services supply and demand without extremely heavy handed centralized economic controls, and what even is a sensible model for the integration of superhuman intelligences into our political, legal, and economic system anyway? What is the meaning of income tax for entities whose output per unit time is effectively unconstrained, and whose living costs are measured in Watts?
What about Moravec’s Paradox and robotic fine motor control?
We haven’t yet had the same progress in robotics that we’ve had in wordcel LLMs because, we are told, the Common Crawl contains lots of text and not much in the way of usable, specific instructions for robotic actuators. And yet, the internet is stuffed with millions of lifetimes of TV, movies, and YouTube unboxing content. It seems like a small stretch between multimodal video models, inferring 3D motion from 2D video, and mocap. There are even already fine-tuned transformer-based models that can convert a video to motion capture data. Yes, it’s computationally intensive, and yes, it can be run autonomously in some big data center on some huge training run. Enough excuses. Time to give the AIs kinesthetic mirror neurons. Formulated as a question: when will existing large-scale model training methods generalize sufficiently to be able to run, e.g., humanoid robots?