At Davos 2026, Anthropic CEO Dario Amodei told a room full of the world’s most influential investors that AI would replace “most, maybe all” of what software engineers do within six to twelve months. A few hours later, Google DeepMind CEO Demis Hassabis took the same stage and said current AI systems are “nowhere near” human-level intelligence, and that we probably need “one or two more breakthroughs” before AGI arrives.
Both men run frontier AI labs. Both have access to roughly the same benchmarks, papers, and internal capabilities data. Yet their public forecasts diverge so dramatically that at least one of them must be either wrong or strategically misleading. The interesting question is which, and why.
Here’s what’s useful to know about the financial structures of these companies.
OpenAI completed its restructuring into a public benefit corporation in October 2025. The nonprofit, now called the OpenAI Foundation, holds a 26% stake worth approximately $130 billion, while Microsoft owns 27%, worth about $135 billion. A $40 billion SoftBank-led round in March 2025, the largest private tech funding round in history, valued the company at $300 billion, and a secondary share sale later that year pushed the figure to roughly $500 billion. As of January 2026, Sam Altman is in the Middle East courting sovereign wealth funds for yet another round, reportedly seeking $50 billion or more at a valuation of $750-830 billion. OpenAI has committed to spending $1.4 trillion on infrastructure, and analysts project cash burn rising from $17 billion in 2026 to $35 billion in 2027 to $47 billion in 2028.
Anthropic, meanwhile, closed a $13 billion Series F at a $183 billion valuation in September 2025. Three months later, they signed a term sheet for another $10 billion at a $350 billion valuation, led by Coatue and Singapore’s GIC. That’s nearly doubling in four months. Their revenue run rate hit $9 billion by year-end 2025, up from $5 billion in August. Both companies are also reportedly preparing for potential IPOs in 2026.
Google, by contrast, generated roughly $140 billion in profit last year. Alphabet’s AI spending comes directly from operating cash flows. They have no external AI-specific investors to please, no valuation to justify to incoming limited partners, and no fundraise on the horizon that requires a compelling narrative about imminent technological discontinuity.
This asymmetry is worth holding in mind.
Sam Altman has set internal OpenAI goals for an “automated AI research intern” by September 2026 and a “true automated AI researcher” by March 2028. He’s written that “the takeoff has started” and that humanity is “close to building digital superintelligence.” Amodei, for his part, predicts models will reach “Nobel-level” scientific capability across multiple fields by 2026 or 2027, and has disclosed that some Anthropic engineering leads “don’t write any code anymore” because Claude does it for them.
These are extraordinary claims. They’re also extremely convenient claims if you’re trying to raise $50 billion from Middle Eastern sovereign wealth funds at an $830 billion valuation.
Every prediction of imminent AGI is, functionally, a pitch. “Six to twelve months until AI does all coding” translates to: the total addressable market for this technology is essentially all knowledge work, the timeline to capturing it is immediate, and if you’re not invested now, you’re going to miss the most important wealth-creation event in history. The more dramatic the capability forecast, the greater the urgency for capital deployment.
Even the doom-mongering serves the fundraise. Amodei has built his brand on being the “safety-focused” AI leader, warning constantly about existential risk. This might seem to cut against the hype, but it actually reinforces it: “We’re building something so powerful it might be dangerous, so you’d better fund the responsible lab that will do it carefully.” It’s a protection racket with good PR.
Sundar Pichai’s messaging is strikingly different. In November 2025, he told the BBC that no company would be “immune” if the AI bubble bursts, acknowledged “elements of irrationality” in the market, and warned users not to “blindly trust” AI outputs. Hassabis consistently places AGI five to ten years out, emphasises unsolved technical problems, and cautions that current systems lack fundamental capabilities in reasoning, planning, and continuous learning.
This is often framed as scientific humility or responsible leadership. Perhaps it is. It’s also strategically optimal for an incumbent that wants to see its VC-dependent competitors starved of oxygen.
Consider what happens if Pichai’s bubble warnings land. Investor sentiment sours on AI. Late-stage funding rounds get harder. Anthropic and OpenAI have to either accept down rounds or dramatically cut spending. Meanwhile, Google keeps writing cheques from its $140 billion annual profit, maintains its training runs, and acqui-hires talent from startups that couldn’t make payroll. The cautious rhetoric isn’t just hedging; it’s competitive strategy dressed up as prudence.
Hassabis’s “one or two more breakthroughs” framing works similarly. It implies that the competition is overpromising, that their timelines are unrealistic, and that the steady hand of DeepMind’s research programme is more likely to get there than the manic energy of the startups. Whether or not this is true, it’s certainly a useful thing for him to say.
What makes the cynical interpretation compelling is that nobody acts like they believe their own conservative forecasts.
Pichai warns about bubbles while Alphabet commits $75 billion in capital expenditure for 2025, almost entirely AI-focused. Hassabis says AGI is five to ten years away while DeepMind breaks ground on fully automated research laboratories in the UK. Google is not behaving like a company that thinks progress is slow or that current approaches are inadequate. It’s behaving like a company in an existential race that happens to have the luxury of downplaying the urgency in public.
Similarly, if Altman and Amodei were making genuine forecasts rather than pitches, you might expect some acknowledgment that hype could be inflating their valuations, or at least some caveat about the uncertainty inherent in forecasting technological discontinuity. Instead, the predictions get more aggressive with each funding round, and the confidence intervals narrow even as the claims get larger.
None of this requires conscious deception or coordinated strategy. It’s simpler than that: people tend to emphasise the aspects of reality that serve their interests, and to genuinely believe the interpretations that benefit them most. Altman probably does think AGI is imminent; he’s been working on it for a decade and sees the internal progress daily. Pichai probably does worry about bubbles; he’s a public company CEO with analysts on every earnings call looking for reasons to downgrade.
The point isn’t that anyone is lying. The point is that when evaluating AI forecasts from AI executives, the correct Bayesian prior is not “neutral expert opinion.” It’s “statement optimised for the speaker’s capital structure.”
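To make that prior concrete, here is a toy Bayes calculation. The numbers are invented purely for illustration, not estimates of anyone’s honesty; the point is simply that a forecast the speaker would issue either way carries very little evidential weight.

```python
# Toy Bayesian update: how much should an executive's "AGI is imminent"
# claim move your belief? All probabilities below are illustrative
# assumptions, not estimates of any real person or company.

def posterior(prior, p_claim_if_true, p_claim_if_false):
    """P(imminent | claim) via Bayes' rule."""
    evidence = p_claim_if_true * prior + p_claim_if_false * (1 - prior)
    return p_claim_if_true * prior / evidence

prior = 0.20  # your prior that transformative AI really is a year or two away

# A neutral expert who only says "imminent" when they genuinely believe it:
print(round(posterior(prior, p_claim_if_true=0.95, p_claim_if_false=0.10), 2))  # 0.70

# A CEO mid-fundraise, who would say "imminent" almost regardless of the truth,
# because the claim itself serves the raise:
print(round(posterior(prior, p_claim_if_true=0.95, p_claim_if_false=0.85), 2))  # 0.22
```

When a statement is nearly as likely to be made whether or not it is true, hearing it barely moves the posterior. That, in probability terms, is what talking your book does to a forecast’s information content.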
Altman needs you to believe AGI is coming fast so you’ll pay $830 billion for OpenAI. Amodei needs you to believe the same so Anthropic can close at $350 billion, nearly doubling its valuation in four months. Pichai needs you to feel uncertain so Google’s cash advantage compounds while its competitors scramble. Hassabis needs you to think the breakthroughs haven’t happened yet, so the competition looks like it’s overselling.
Everyone is talking their book. The truth is probably somewhere in the middle.