Before straying to the theory dark side, I spent some time in college as an Economics major; I’ve always been strangely charmed by a silly joke I first heard back then. It goes like this: an engineer, chemist, and economist are all stranded on a deserted island. Oddly, this island lacks any natural food resources, but it does have a massive cache of canned food. The question for our trio: how to open the cans so they can survive? The answers:
Engineer: cut down trees to build a trebuchet that will launch the cans onto the rocks on the beach, where the force of the impact will break them open.
Chemist: use the wood to build a fire and heat the cans until they burst.
Economist: assume a can opener.
On one level, the joke comes at the expense of the unworldly economist, so lost in formal models that he cannot see that out in the world one cannot just summon into being whatever is needed to make the model work. On another level, the joke seems to confirm the worldview of such economists by reiterating yet another Robinson Crusoe narrative, wherein we have fully social creatures (representing disciplines/professions) somehow living in a “natural” environment – filled with very “unnatural” (i.e., humanly produced) resources. In other words, the problem is not that you can’t assume a can opener; it’s that you can’t assume the canned goods in the first place.
I’m reminded of this joke whenever I read the missives of our would-be “AI” overlords.
Here’s Sam Altman, 2 months ago:
We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
And Mark Zuckerberg just a couple of weeks back:
Over the last few months we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.
One of the most brilliant rhetorical moves of the AI advocates has been both to change terms and to add new ones, in a way that normalizes and naturalizes a language implying that previously stated aims have already been achieved. Let me explain by way of the terms themselves.
Twenty years ago “artificial intelligence” was itself the stated goal: to create a machine with true intelligence on the level of a human being. Twenty years ago “AI” was something we did not have, but hoped to have. Then, perhaps a decade ago, but speeding up massively in the recent period of LLM dominance, “AI” became a term one simply used to describe current technology, even though that technology has not achieved the previously stated goal. The stated goal shifted to “AGI,” artificial general intelligence, but in talking about “AGI” the AI advocate simply posited the existence of AI. Indeed, large language models were re-badged as AI, despite having been understood, until quite recently, as nothing of the sort.
The next point in the future horizon of glorious superabundance is named by both Altman and Zuckerberg: superintelligence. What does this mean? Absolutely no one knows. It’s just a made-up term to describe a future where machines are smarter than humans and can therefore gain in intelligence exponentially; AI and AGI had previously described this very same future. But by positing superintelligence as the future goal, and by repeating over and over again that it will absolutely, positively be here almost immediately, Altman and Zuckerberg can then just take it as read that what we are dealing with today is “AI,” artificial intelligence.
To be clear, AI has never been achieved. Rather, the modest achievements of large language models have been renamed under the utterly inappropriate term, AI.1
I spend a lot of my time reading about LLMs and the politics of AI, phenomena that I think are likely to be the very fulcrum points for both national and global politics and political economy for much of the near future. But I don’t write a lot about these things, mainly because I think I have less marginal value to add, given the great work being done in these areas.
For example, Henry Farrell has an entire newsletter devoted to this area of inquiry and more than two years ago (June 2023), long before others began to see it, he offered a sharp vision of where this all was headed:
Large Language Models (LLMs) are about to transform the Internet. They batten on vast corpuses of textual data, which they turn into weighted vectors, and then use to predict and generate text. That allows them – for the moment – to spew out content that more or less approximates some common denominator of what people know. But this doesn’t mean that algorithms are more valuable than human knowledge. Plausibly, it is exactly the opposite.
LLMs’ value, which is real, involves summarizing and organizing human intelligence, rather than substituting for it. Their dependence on organized human intelligence is only going to grow. LLMs produce text as well as analyzing it. As they begin to feed on text that they and their siblings have produced, their informational quality will rapidly degenerate. This means that the Internet as a whole is likely to become considerably less valuable over the next couple of years (emphasis added).2
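To see the dynamic Henry is describing in miniature, here is a toy sketch (my illustration, not his, and every detail in it – the one-dimensional Gaussian standing in for an LLM, the corpus size, the number of generations – is an assumption made purely for illustration): fit a trivially simple statistical “model” to some data, have it generate synthetic data, retrain on that output, and repeat. The spread of what the model produces tends to drain away generation by generation, a cartoon of the degradation Henry predicts for an Internet increasingly trained on its own exhaust.

```python
# Toy sketch of a model repeatedly retrained on its own output
# (often called "model collapse"). Everything here is illustrative:
# a 1-D Gaussian stands in for an LLM, 50 numbers stand in for a corpus.

import random
import statistics

random.seed(0)

def fit(data):
    """Maximum-likelihood Gaussian fit: return (mean, stdev)."""
    return statistics.fmean(data), statistics.pstdev(data)

def generate(mu, sigma, n):
    """Sample n synthetic data points from the fitted model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: "human-written" data with real spread (stdev = 1.0).
corpus = [random.gauss(0.0, 1.0) for _ in range(50)]

for generation in range(1, 301):
    mu, sigma = fit(corpus)            # train on the current corpus
    corpus = generate(mu, sigma, 50)   # next corpus is pure model output
    if generation % 50 == 0:
        print(f"generation {generation:3d}: corpus stdev = {sigma:.4f}")
```

Run it and the printed spread typically dwindles toward zero: the model ends up confidently producing ever narrower variations on itself, which is the informational degeneration in Henry’s prediction stripped down to its simplest possible form.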
We must read Altman and Zuckerberg’s latest proclamations in the context of Henry’s earlier prediction. They are positing unlimited future value (i.e., profits) against a backdrop of declining present profits and of billions of dollars burned every month training LLMs.
On the one hand, this is what people who run companies, especially internet companies, do: lay out a bold vision for exponential future profits to entice investors to give them money today. I’m not the first to point out that Altman likes to publish his little philosophical gems right before a new round of fundraising.
On the other hand, there is something distinct and distinctly problematic about tethering future profits to future superintelligence. It’s one thing to say: we will sell X units of this product in Q3 of 2027, and that will make us a shit-ton of money. It’s quite another to say this:
From here on, the tools we have already built will help us find further scientific insights and aid us in creating better AI systems. … this is a larval version of recursive self-improvement.
There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off.
If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different. (emphasis added)
The key shift here happens in the middle paragraph, at the moment we move from the sentence I’ve italicized (“The economic value creation has started a flywheel of compounding infrastructure buildout…”) to the one I’ve bolded (“And robots that can build other robots…aren’t that far off”). The italicized sentence is pure sleight of hand: Altman is saying that because his spending is growing exponentially now, profits will grow exponentially in the future. This is not at all how anything works – certainly not how capitalism works. Spending too much does not guarantee capitalist success (i.e., selling products at a profit); rather, it tends to undermine it.
The key is that the italicized sentence, inane though it may be, remains at least somewhat attached to economic reality, to the possibility of future profit. But the bolded sentence shifts entirely out of the domain of capitalist economic logic, onto the terrain of science-fiction visions of datacenters building datacenters.3 This is the move to the singularity, where the recursion of intelligent machines allows them almost immediately to become superintelligent machines, by teaching themselves to be even smarter.
The entire third paragraph remains on this same sci-fi terrain, where the discussion is about “progress” for all of society, not profit for AI companies. Indeed, if you’ve read Kim Stanley Robinson’s Mars trilogy you can easily fill out Altman’s vision: machines repairing and building machines, making it possible to transform a natural environment from uninhabitable to habitable, and thus also to create ungodly levels of “abundance.” But Altman is leaving out the fact that Robinson’s Martians live largely outside the ambit of “global capitalism.” They live their very long lives trying to improve the solar system, while struggling to make and maintain human meaning. They are not capitalists and they don’t generate profit.
Altman, of course, fancies himself a visionary, so even though these missives are disguised investor pitches, the disguise itself matters; Altman would never deign to map out a specific plan for the profitability of OpenAI. And he doesn’t have to, because even those writing skeptically about “AI” today seem to have absorbed the first principle: that whoever gets to “AGI” or “superintelligence” first will be rewarded with a big pile of money.
In yesterday’s NY Times, in a piece titled “Silicon Valley Needs to Stop Obsessing Over Superhuman A.I.,” Eric Schmidt and Selina Xu say they are not nearly so sure as Zuckerberg and Altman “how soon artificial general intelligence can be achieved.” But before suggesting that the US pivot toward using LLMs productively now, rather than racing to reach “AGI” first, Schmidt and Xu outline the logic of the race:
In 1965, Mr. Turing’s colleague I.J. Good described what’s so captivating about the idea of a machine as sophisticated as the human brain. Mr. Good saw that smart machines could recursively self-improve faster than humans could ever catch up, saying, “The first ultraintelligent machine is the last invention that man need ever make.” The invention to end all other inventions. In short, reaching A.G.I. would be the most significant commercial opportunity in history. (emphasis added)
But no, that last sentence does not in any way follow from all that precedes it; it’s the same bogus pivot Altman makes. Capitalists do not make money because they improve society, invent transformative tech, or do good things. They make money because they produce widgets or services that they can sell for profit.
The Good quote gets repeated everywhere, but if my inability to get my hands on the original article is any indication (Johns Hopkins’ library, which subscribes to everything, didn’t seem to have it), it appears unlikely that many people have read the piece with much care. In his conclusion, Good repeats the famous line of logic, but with a key twist:
It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make, since it will lead to an “intelligence explosion.” This will transform society in an unimaginable way. (emphasis added)
Yes, that is the right conclusion. If something like “superintelligence” ever did occur, the social (and probably political) revolutions would be so dramatic as to scramble all of the rules and norms and standard practices we know today, and that includes the laws of motion of capitalism.4 The existence of recursively self-improving, superintelligent machines would undermine the very basis of capitalist production and exchange.
I say this not as an AI advocate, and in saying it I do not for a moment accept the premise that superintelligence is “undeniable” (Zuckerberg) or that “we are past the event horizon; the takeoff has started” (Altman). Every prediction of future AI – including Good’s, which is now 25 years past due – has proved incorrect, and I’m with Henry in thinking that while LLMs are certainly a useful technology in some contexts, (a) they are not anything close to “intelligent” and (b) their future is likely to be one of plateauing gains (as we just saw with the catastrophic launch of GPT-5) or model collapse.
I say this to point out that superintelligence, AGI, and all the rest are always a ruse. Zuckerberg wants to hang on to the billions in ad revenue that he’s already captured; Altman wants…I’m not actually sure – perhaps to take that ad revenue from Zuckerberg (?), perhaps to go to Mars with Elon Musk5 (?), perhaps something else? Or maybe the real answer is the answer he himself gives: “I don’t care if we burn $50 billion a year. We are making AGI; it’s gonna be expensive.”
The current political economy of AI rests on positing future value returns (i.e., profit6) that will only be possible after a technological breakthrough of such unprecedented extent that it would entail a transformation of society so radical as to undermine the very conditions of capitalist profit.
