I’ve been building software for long enough to recognize the pattern we’re in right now. The Super Bowl ads this weekend are going to be packed with AI companies spending millions per 30-second spot. Companies burning billions in losses will pay millions to tell you their product is revolutionary. This is end-stage bubble behavior, and if you’ve seen it before, you know exactly what comes next.
Twenty-six years ago, almost to the day, I watched the same thing happen with dot-coms. January 2000 felt exactly like January 2026 feels right now. An internal essay called “bubble.com” had leaked through the startup community, laying out in detail why tech valuations were insane and a crash was inevitable. Things felt tenuous. People were worried. Then came the Super Bowl with so many dot-com commercials they got their own Wikipedia page. The most memorable was E-Trade’s monkey dancing in a garage with the tagline, “Well, we just wasted $2 million. What are you doing with your money?”
Two months later, the NASDAQ peaked. Over the next eighteen months, it lost three-quarters of its value. It wouldn't recover those highs for fifteen years. The Super Bowl ads weren't the cause of the crash, but they were the warning sign. When companies with unsustainable business models spend absurd sums on advertising, they're not building for the future. They're desperately trying to stave off the inevitable. I relived that downfall at Vroom, where I narrowly avoided being one of 800 people laid off (I quit just two weeks before the cuts). Vroom, the used-car sales platform, had real success that even accelerated during COVID, so we all thought we were doing great. The layoffs came two years later. On January 22, 2024, Vroom announced it had halted all purchases and sales of used vehicles.
The Uncomfortable Parallels
The current AI situation has the same discordant feel. We know LLMs have plateaued. Industry insiders are admitting it’s time to return to research mode rather than pretending we’re six months from AGI. The valuations don’t make sense. The entire foundation model ecosystem is subsidized by venture capital that expects returns that simple math says can’t happen.
But here’s the part that should terrify investors: even the growth story doesn’t work. The problem with LLMs is that compute cost scales directly with usage. This isn’t a situation where scale improves margins. We’re supposed to be in a “growth stage,” but growth itself is the problem. Every new user requires more GPU time, more power, more infrastructure, and none of that cost amortizes away. The unit economics don’t improve as you scale; if anything, they get worse.
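A toy gross-margin comparison makes the shape of the problem concrete. All of the numbers below are hypothetical, invented purely for illustration:

```python
# Illustrative only: contrast the marginal cost structure of traditional
# software with an LLM service. Every figure here is a made-up assumption.

def gross_margin(revenue_per_user: float, serving_cost_per_user: float) -> float:
    """Fraction of each subscription dollar left after serving costs."""
    return (revenue_per_user - serving_cost_per_user) / revenue_per_user

# Traditional SaaS: serving one more user costs almost nothing,
# so margins hold (or improve) as the user base grows.
saas = gross_margin(revenue_per_user=20.00, serving_cost_per_user=0.50)

# LLM service: every query burns GPU time and power, so serving cost
# grows in lockstep with usage; a heavy user can cost more than they pay.
llm = gross_margin(revenue_per_user=20.00, serving_cost_per_user=18.00)

print(f"SaaS margin: {saas:.2f}")
print(f"LLM margin:  {llm:.2f}")
```

The point isn’t the exact figures; it’s that LLM serving cost rises with every query, so growth never buys the margin expansion that traditional software enjoys.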
This is where the infrastructure bottleneck becomes critical. The industry promises massive new data centers to support this growth, but those data centers aren’t actually getting built. There’s physically not enough power being generated to supply them. The electrical grid can’t support the computational demands these companies are placing on it. There’s nothing in the timelines or infrastructure plans that suggests this will change within the window these companies need it to. We’re looking at a fundamental physical constraint that no amount of venture capital can overcome.
Meanwhile, the usage metrics are collapsing. We’re seeing usage decline by up to 20% across major platforms. Consumers aren’t using these tools the way the companies promised they would. Businesses aren’t seeing the promised ROI. Every valuation assumes hockey-stick adoption, but the real-world adoption curve is flattening. Companies are burning cash at rates that make profitability mathematically impossible without an engineering miracle or a physics breakthrough.
There's a perfect example of the desperation I'm talking about. Anthropic ran ads mocking OpenAI for - ironically - running ads and using paid marketing to prop up their user base. Anthropic positioned itself as the principled alternative, the company that wouldn't resort to such desperate measures. Except that positioning itself is a desperate measure.

And yet, a social network exclusively for AI agents to communicate is going viral. Moltbook is fascinating as a technical experiment, but it’s also a symptom of how disconnected the AI hype is from economic reality. We’re building infrastructure for AI agents to socialize before we’ve figured out how to make the agents reliably useful, let alone profitable.
This weekend's Super Bowl will feature a surge of AI commercials. Companies hemorrhaging money will try to convince you they're the future. It's the same playbook as 2000, with a similar likely outcome. The crash may take months, even years, to unfold, but the direction is clear.
The Fundamental Problem: Nobody Is Making Money
Here's what really haunts the AI industry right now: NOBODY in this field is making money. Not a single company. Every major player is spending fortunes with no legitimate path to returns. Everyone is hoping someone, somewhere, figures out how to turn any of this into actual profit.
The middle management problem accelerates this. Somewhere in your company right now, there’s a middle manager who knows “AI” is the buzzword. Executives are pushing everyone to integrate AI, use internal LLM tools, and discuss AI strategy on earnings calls. But eventually - and this day is coming - someone is going to want to actually see the ROI. They’re going to ask: “How much did we spend on AI infrastructure and training, and what did we get back?” And the answer, for the majority of applications, is negative.
Big tech companies are pushing everyone to use internal LLM tools, and the economics are insane. Every time an employee uses an internal LLM tool, the company pays dollars (or at least cents) to generate responses. That cost is buried in infrastructure budgets and ignored for now. But what happens when AI stops making the stock price jump? What happens when these companies are still spending 50 cents every time someone uses an internal LLM tool, but the market no longer rewards them for having AI?
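To see how fast those buried costs compound, here’s a back-of-envelope sketch. The 50-cents-per-call figure is the one quoted above; the headcount and usage numbers are invented assumptions:

```python
# Back-of-envelope: annualized cost of an internal LLM tool.
# cost_per_query is the estimate from the text above; everything else
# is a hypothetical assumption chosen for illustration.
cost_per_query = 0.50      # dollars per internal LLM call
employees = 50_000         # assumed headcount at a large tech company
queries_per_day = 10       # assumed calls per employee per workday
workdays_per_year = 250

annual_cost = cost_per_query * employees * queries_per_day * workdays_per_year
print(f"${annual_cost:,.0f} per year")  # $62,500,000 per year
```

Even at these modest assumptions, that’s $62.5 million a year hiding in an infrastructure budget, invisible right up until someone asks for the ROI.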
We're at the Wile E. Coyote moment. The coyote has run off the cliff and is still going, legs pumping in the air.
The timeline is the question nobody can answer. Some of us think Q2 2026 is the inflection point. Others argue for 2027. The real question is how long investors will keep throwing money into the fire with no return. There's reportedly about $100 billion in VC funding left worldwide this year. OpenAI has promised $1.4 trillion in data centers and has raised about $60 billion. That's just one company, and even spread over five years, its commitments work out to roughly three times per year what the entire VC ecosystem could fund.
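Here is the arithmetic behind that comparison, using the rough dollar figures quoted above; the five-year build-out window is my own assumption, not a reported plan:

```python
# Rough estimates from the text; the 5-year window is an assumption.
vc_remaining = 100e9        # ~$100B of VC funding left worldwide this year
openai_committed = 1.4e12   # ~$1.4T in promised data-center build-out
openai_raised = 60e9        # ~$60B raised so far
buildout_years = 5          # assumed spending window

annual_commitment = openai_committed / buildout_years
print(annual_commitment / vc_remaining)   # ~2.8x all remaining VC, every year
print(openai_committed / openai_raised)   # ~23x what OpenAI has actually raised
```

Shorten the assumed window and the multiple only gets worse; there is no plausible schedule under which the committed spend fits inside the available capital.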
What Happened After The Dot-Com Crash
Here’s what most people forget about the post-2000 period. The crash was terrifying while it was happening, but what came after was actually worse for developers in the short term. Not only did VC funding dry up and startups fold, but a massive wave of offshoring hit at the same time. Companies that survived the crash looked for ways to cut costs, and moving development overseas was the obvious choice.
For a couple of years, it felt like software development jobs in the US were just gone. Companies were closing positions because the money had dried up, and the positions that did exist were being moved to offshore vendors. It was a double hit that made the job market brutal. Many developers left the field entirely during this period.
But then something interesting happened. The offshore code started coming back. It turned out that, for various reasons (immature foreign firms, communication barriers, high turnover at overseas providers), the code quality wasn’t up to par. US companies were getting deliverables that didn’t meet requirements, had serious bugs, or couldn’t be maintained. They needed developers to clean up the mess.
Some of us made substantial money on those cleanup projects. The offshore experiment had failed not because the developers overseas were incompetent, but because the model of shipping requirements across twelve time zones and expecting production-quality code back was fundamentally flawed. Complex software needs context, communication, and iteration. You can’t offshore that effectively when the cost savings depend on minimal interaction.