After the AI Crash: A Proposal


Early large language models couldn’t handle anything but the simplest math problems. While leading models are now much better, AI firms still have a problem with basic math: Companies are investing trillions of dollars on the strength of tens of billions of dollars in revenue. J.P. Morgan estimates $5 trillion of AI infrastructure investment over the next five years. Yet OpenAI and Anthropic currently earn annualized revenues of $25 billion and $19 billion, respectively, and “profit” isn’t even part of the conversation.

To put things in perspective, the hundreds of billions of dollars in planned 2026 capital expenditures from hyperscalers are on a path to claim a larger share of U.S. GDP than peak investment in the Manhattan Project, the expansion of electricity, the Apollo space program, the construction of the interstate highway system, or the broadband buildout during the dot-com bubble.

In a paper published by VPA today (see the coverage from Politico’s Morning Money newsletter), I argue that this massive level of investment, coupled with opaque financial engineering, means an AI market crash could lead to economy-wide, systemic consequences.

Many people compare today’s AI moment to the dot-com bubble, sometimes to minimize the risk. But it’s worth remembering the actual consequences of that bubble: 200,000 jobs were lost, thousands of companies went under, and the Nasdaq lost 80% of its value from peak to trough and didn’t recover for 15 years.

An AI crash could be far worse. AI-related investment accounted for more than 90% of U.S. GDP growth in the first half of 2025. By one macroeconomic measure, the AI bubble is already 17 times larger than the dot-com bubble and four times larger than the 2008 housing bubble. And the financial arrangements underwriting all of this — circular equity financing, off-balance-sheet special-purpose vehicles, private credit, asset-backed securities — are complex, interlocking, and opaque. Just as a housing crisis in 2007 turned into a financial system crisis that hit every industry in 2008, it’s very easy to see how an AI crash would spread through the economy.

After the 2008 crash, Congress passed Dodd-Frank relatively quickly and without time for serious debate. Many liberals and even some conservatives mourned its shortcomings; for example, the Too Big to Fail banks have only gotten larger, more powerful, and more capable of causing global shockwaves should one go under.

As Ganesh Sitaraman and I recount in a new opinion piece in Time Magazine, we need to learn the lessons of the 2008 crash for a possible coming AI crash. Perhaps most importantly, policymakers need to start identifying reforms now, because once the crisis hits, there will be no time to flesh out new ideas that genuinely address the structural issues in the sector. More specifically, policymakers should focus on helping families and individuals, not bailing out companies; identify and pursue structural reforms that actually solve the core issues in the sector; and prosecute those who engaged in fraud or illegal activities.

With respect to a possible AI crash, the paper lays out seven areas for Congress to act:

  1. Stop financial engineering proximate to the crash. Ban circular equity financing — where chip and cloud companies invest in AI companies that spend the money buying their products — a form of vendor financing that appears unique at this scale. Require full disclosure of debt financing deals, including the off-balance-sheet special-purpose vehicles that currently keep more than $100 billion off tech companies’ books. End government subsidies that pit state and local jurisdictions against each other in a race to the bottom.

  2. Prosecute fraud. After major financial crises — the Great Depression, the 1980s savings and loan crisis, the dot-com bubble, the Covid-19 pandemic — prosecutors looked for and found major fraud, leading to significant prosecutions and prison time for those who perpetrated it. After 2008, almost no one went to jail, which became a key flashpoint for political backlash. If an AI crash involves fraud, law enforcers should investigate and prosecute it without fear or favor.

  3. Build a public cloud from stranded assets. When neoclouds and data center special-purpose vehicles go bankrupt, their physical infrastructure could be acquired at fire-sale rates to create a public option for cloud computing, operationalized through the National AI Research Resource and the Energy Department’s national lab system.

  4. Protect workers. Expand unemployment insurance and remove work requirements on safety net programs, as Congress has done in past downturns. Establish a Digital Works Progress Administration that matches displaced knowledge workers, like software developers, with well-documented needs in local and state governments. Limit workplace surveillance because, even where AI doesn’t eliminate jobs, it can degrade them.

  5. Separate algorithms from data centers: a Glass-Steagall for AI. Structurally separate software from hardware, just as the original Glass-Steagall separated commercial and investment banking to prevent the kind of systemic risk that accelerated the Great Depression. Without intertwined ownership structures, compute companies would make more rational, market-driven decisions rather than subsidizing AI development at a pace that might accelerate a crash.

  6. Regulate digital utilities and establish a new regulator. Chips, cloud computing, and foundation models have the features of markets traditionally regulated as utilities — high concentration, high barriers to entry, and natural monopoly dynamics. Congress should apply traditional regulatory tools to these markets and create a new regulatory agency to administer them.

  7. Ban extractive business models. Ban surveillance advertising, surveillance pricing, and surveillance wages before an AI crash drives companies to double down on them. The dot-com bubble was a key accelerant of surveillance advertising’s rise the first time around.

Instead of waiting for the crisis and then hastily developing insufficient policies, lawmakers should start preparing now. Meaningful reforms take time to formulate, and in a scramble, they get shelved in favor of quick action. The time to debate these ideas is before the crash, not after.