Jevons Paradox, AI, and Humanity’s Relevance


There are AI bros and there are finance bros. Other than having huge bro-crushes on each other, they have something in common these past many months:

They love talking about something called the Jevons Paradox.

Fucking Love It.

In articles, blog posts, social media posts, podcasts, and elsewhere, it’s Jevons Paradox this, Jevons Paradox that.

With little variability, they seem to be saying pretty much the same thing over the past six months plus. Their thesis statement goes along these lines:

The Jevons Paradox dictates that, as a resource becomes easier to access because of technological advances, consumption of that resource rises instead of falling. Ergo, AI will not put humanity out of work – because demand will increase, keeping both AI and humans relevant.

This argument always seemed a bit off to me. So I went to the source, reading some of the writings of William Stanley Jevons.

It now seems to me that the bros never bothered to fully understand the context of the Jevons Paradox – and, therefore, that they do not appreciate what the concept truly means.

(Perhaps they had AI summarize it for them.)

In 1865, English economist William Stanley Jevons published a book called The Coal Question. In it, he explored contemporary views on coal – James Watt’s invention of a more efficient coal-reliant steam engine decades earlier having supercharged the English economy.

Many had taken to arguing that improved technology would lead to less coal use. Jevons pointed out that the opposite was true. History and data showed that coal consumption increased as technology became more efficient – not the other way around – specifically because coal became cheaper and easier to use.
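The mechanism Jevons observed can be sketched with the standard rebound-effect arithmetic (this is the textbook constant-elasticity model, not a formula from The Coal Question itself): a more efficient engine lowers the effective price of each unit of useful work, and if demand for that work is elastic enough, total fuel consumption rises rather than falls.

```python
def fuel_use(efficiency, elasticity, fuel_price=1.0, scale=1.0):
    """Total fuel consumed under a constant-elasticity demand curve.

    Demand for useful work: S = scale * price_per_unit_work ** -elasticity,
    where price_per_unit_work = fuel_price / efficiency.
    Each unit of work burns 1 / efficiency units of fuel.
    """
    price_per_unit_work = fuel_price / efficiency
    work_demanded = scale * price_per_unit_work ** -elasticity
    return work_demanded / efficiency

# Elastic demand (elasticity > 1): doubling engine efficiency
# *raises* total fuel use -- the Jevons rebound.
assert fuel_use(efficiency=2.0, elasticity=1.5) > fuel_use(efficiency=1.0, elasticity=1.5)

# Inelastic demand (elasticity < 1): efficiency gains do cut fuel use,
# which is what Jevons's contemporaries assumed would always happen.
assert fuel_use(efficiency=2.0, elasticity=0.5) < fuel_use(efficiency=1.0, elasticity=0.5)
```

The toy model makes the "paradox" unmysterious: whether efficiency saves a resource or devours it depends entirely on how hungrily demand responds to a lower price.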

Jevons-Paradox-obsessed AI boosters love to tout this point in applying it to AI, but they seem to ignore the concept’s ramifications.

Jevons pointed out that the spike in coal consumption presented a sustainability issue for coal as a resource (yes, people were concerned about that even in the 19th century). Jevons explicitly stopped short of arguing that England would literally run out of coal. (“[O]ur mines are literally inexhaustible. We cannot get to the bottom of them; and though we may some day have to pay dear for fuel, it will never be positively wanting.”) Neither were Jevons’s concerns born of environmentalism. Rather, he viewed this as an issue of economic sustainability: Jevons argued that the cost of coal extraction would rise to keep up with demand – eventually to an economically prohibitive level.

To wit: The country wouldn’t run out of coal. It would simply stop digging it up and making it widely available. This would lead, Jevons wrote, to “the end of the present progressive condition of the kingdom.”


At the time, England’s use of coal surpassed that of the entire rest of the world – assuring the British Empire’s global economic dominance. But coal was a global resource, and England did not have a monopoly on technology.

“[I]t is impossible we should long maintain so singular a position,” wrote Jevons. “[N]ot only must we meet some limit within our own country, but we must witness the coal produce of other countries approximating to our own, and ultimately passing it.”

With an aging industrial base, England had tremendous technical debt. Meanwhile, other nations, like Germany and the US, managed to scale up their industrial technology rapidly – to the point that coal partially obsolesced. The US was (and is) an especially vast and resource-rich nation – particularly in oil; as technology further advanced, oil became the dominant energy source.

All too soon, Jevons’s worries would come to pass. England’s economic power slipped. The US would surpass England in economic power – to the point that the latter would wind up massively indebted to the former thanks to two world wars.

The same technological advances that had built the British Empire’s vast economic success – through increased accessibility to and demand for coal – also led to the British Empire’s downfall because of the strain on resources these advances created.

Because the fundamental truism of economics is that competition and alternatives exist.

That’s the culmination of the paradox – that greater accessibility to the resource leads to de facto scarcity.

The one thing the bros are right about is that we are seeing the Jevons Paradox begin to play out – but it’s not happening in the way they have been describing.

While AI-delivered insights – along with AI-created documents and “art” – are readily accessible and relatively cheap, the cost of extracting them is skyrocketing.

Even before AI fervor reached its current fevered pitch, data-center costs had been rising because of COVID-related economic issues like supply-chain challenges and construction costs. Today, data centers are in piping-hot demand. At the same time, they are becoming yet more difficult and expensive to build and maintain thanks to, inter alia, the high price of natural gas (care of the Strait of Hormuz crisis), insufficient infrastructure, and other resource constraints.

As Ed Zitron writes in his own newsletter:

“We’ve already got a shortage in the electrical grade steel and transformers required to expand America’s (and the world’s) power grid, we’ve already got a shortage of skilled labor required to build that power (and data centers in general), and we’re moving massive amounts of heavy shit around a large patch of land using thousands of people, which will cost a lot of gas.”

(Emphasis in original.)

Moreover, today’s AI hyperscale data centers aren’t the data centers of yesteryear. They use many times more electricity than conventional data centers do.

When grid connectivity doesn’t provide enough power, and as data centers are increasingly compelled to go off-grid, data centers turn to high-pollution generators and even jet-engine gas turbines(!). For these and other reasons, US data centers collectively cause billions of dollars in environmental damage each year and health problems for nearby residents.

This has made a lot of everyday people none too pleased with data centers. Voters and advocacy groups have been increasingly mobilizing against data centers. Several states (the latest being Maine) have introduced legislation to put a moratorium on data-center development; so too have Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez at the federal level.

As a result, building data centers requires not only material resources but also a lot of political and social capital. (Ironically, data-center developers may find themselves stunted in growing that capital because the vast majority of PR people outsource their work to AI.)

To speak more on material resources, data centers also face a hardware challenge. AI’s gluttony for more memory combined with society’s gluttony for more AI has caused an unprecedented memory-chip shortage. This shortage is slated to last through at least 2027 (with analysts predicting that only 60% of demand in high-bandwidth memory products will be met by EOY 2027) – and possibly into 2030.

Meanwhile, a looming strike of Samsung workers set to begin on May 21, 2026, is likely to exacerbate costs and shortages.

The chip shortage is having downstream effects on the rest of the world, all the way through to the consumer market. Manufacturers are shipping lower-performing consumer tech, saving as much RAM as possible for Big Tech and enterprise AI needs.

No doubt noticing that consumers and laborers – pesky little hoi polloi human beings who whine about trivialities like aquifers, air quality, noise pollution, and having enough food to eat – are getting in the way of the AI monolith, OpenAI CEO Sam Altman has proposed making them fundamentally reliant on AI. Specifically, Altman has gone full Marie “let them eat cake” Antoinette and proposed “universal basic compute” – allotting to individuals x number of AI tokens each month as a form of centrally distributed income.

(…which, it would coincidentally seem, would make Sam Altman and his peers in direct control of a global currency. What a fun coincidence that I’m confident is completely unrelated to why Altman thinks that’s a good idea.)

My point here is that, as AI growth and demand have exploded, the costs of “extracting” AI insights and AI deliverables have risen across the board. AI is now in competition with other interests for resources.

At scale, assuming democratic free markets continue, this begins to seem fruitless. Many – including the CEO of IBM – have called out the AI emperor for having no clothes, pointing out that the economic math does not work for AI’s scalability.

At least, not without major policy shifts to prop up AI – free market be damned.

True, the Trump Administration has been quite friendly to AI-booster interests, as was the Biden Administration. But while the US is playing a game of “go all in on AI to beat the Chinese,” neither country has a monopoly or oligopoly on either technology or insights.

In other words, as AI butts up against the constraints of capital and resources, the greatest competitive threat to AI is human innovation.

This sounds trite, but those driving the AI revolution have as much as admitted this by action if not by word. The AI industry is fundamentally powered by human innovation.

The biggest LLMs could not exist the way we know them if not for being allowed to get away with large-scale scraping of copyrighted content. Meanwhile, AI startups pay human creators and specialists to train AI (i.e., to replace them).

Still, it’s not enough. Consider that, last year, AI giants began hiring content specialists at salaries far above market; OpenAI posted a position for a Content Strategist for $310k–393k, while Anthropic posted for a Content Marketing Lead for $240k–300k.

In effect, while other companies are racing to replace content creators and ContentOps specialists with AI, two of the biggest AI juggernauts in the Western world publicly admitted that their own technology paled in comparison to what a human can do.

The same is true of other areas. A recently published paper (by the former CEO of Infosys and his son, no less) demonstrated mathematically that LLM-based agents are fundamentally limited and cannot perform complex tasks.

AI’s reliance on humans would seem at first to prove the Jevons-related point of the AI bros – excepting that AI does not (yet) have a monopoly on humanity. Just as humans brought LLMs and the current AI boom into existence, humans could relegate AI to the realm of curiosities – if not the junk pile – through their own innovation, policies, and competing demands.

Humans are indeed still relevant, but only in spite of AI – not because of it. AI needs humans to keep itself going, but unsustainably so. In the long run, the only way for AI to “win” would be through what antitrust activists might call “anti-competitive behaviors” that would nullify humanity’s relevance. The two cannot coexist at arm’s length.

(Lest you find this to be empty doomerism, Anthropic’s own research demonstrates these points.)

In pulling the thread of the bros’ Jevons Paradox arguments to this extent, we highlight the disturbing foundation underlying their position: humans, too, are resources that power AI.

The Jevons Paradox (the real one) is about constraints, not infinite expansion. As such, it dictates that the resource eventually competes with itself.

That is what humanity has done to itself with AI, and that is what AI is doing with humanity.

Which will obsolesce first?
