Net Neutrality for AI


Over the course of 2025, “vibe coding” went from Silicon Valley pet projects to a global phenomenon. One major dictionary anointed the phrase its “word of the year.” In just a few short months, companies have reached more than $3 billion in annualized revenues from vibe coding, or AI tools that aid software development. Major software companies appear to be slowing hiring, even as questions remain about whether and to what degree AI will replace or augment coders.

But 2025 wasn’t completely smooth for every vibe coding startup. Take Windsurf, for example, which at one point was the second most popular standalone AI coding company in the world. In April, media reports suggested that OpenAI was nearing a deal to acquire Windsurf for roughly $3 billion. But a few weeks later, before the deal was finalized, Windsurf got unwelcome news from a critical supplier: Anthropic, OpenAI’s chief competitor, cut off Windsurf’s access to the Claude AI foundation models (FMs) that Windsurf relied on. As FMs grow in importance, FM developers’ market power over these essential inputs—and the conflicts of interest they have with their own customers—could stifle innovation in AI applications across diverse industries.

Anthropic’s conduct raises two concerns. First, as an executive explicitly stated at the time, Anthropic cut off Windsurf’s access to its FMs because it decided that continuing to serve its customer could end up helping OpenAI. Some might see that as the normal and perhaps even healthy course of market competition. But, second, Anthropic had an additional incentive to cut Windsurf off: Anthropic was beginning to offer its own coding agent, Claude Code, making the company a competitor to its own customers. This conflict of interest has become even more pernicious in the months since the Windsurf saga, as coding tools have become increasingly critical to Anthropic’s financial future.

Before we dive into the particulars, it’s worth taking a step back to understand why these FMs are so important—both to vibe coding and beyond. FMs have become essential infrastructure for the startup economy. Because FMs are prohibitively expensive to build, application developers routinely build upon leading models. For example, a very large portion of Y Combinator’s recent batches of startups are building AI applications atop FMs. Just three FM providers make up nearly 90% of FM API revenues: Anthropic at 40%, OpenAI at 27%, and Google at 21%, according to Menlo Ventures. That reliance, coupled with market concentration, means that AI FM providers have become gatekeepers—and that they can abuse their economic dominance to pick winners and losers among customers. In essence, without policy change, FM developers get a kill switch over any applications that compete with their in-house offerings.

This kind of market power is prevalent in many regulated industries where providers build upon foundational infrastructure owned by others, including telecommunications, transportation, and banking. In those cases and others, policymakers have instituted neutrality or nondiscrimination rules that prohibit a company from abusing its economic position to discriminate among customers on access, pricing, or terms, in order to preserve contestability in the broader market.

As we argue in a new report published by VPA, now is the time for Congress to institute a simple neutrality rule to prevent this unfair and anti-competitive behavior before it takes root. We take inspiration from ‘net neutrality’ rules, first proposed by Tim Wu in 2003, which focused on encouraging competition and innovation by limiting the ability of broadband providers to discriminate among different types of internet traffic. Our proposed rule would prohibit AI FM providers that make an API available to external parties from unreasonably or unjustly discriminating among similarly situated customers in terms of access, latency, cost, and quality of service, with exceptions for dealing with, for example, unlawful activity and security risks. That means that while FM providers may offer different tiers of service on the open market, they cannot discriminate among customers who purchase access within those tiers. So, in the scenario described above, Anthropic could not have curtailed Windsurf’s access to its Claude API (or charged more, limited API calls, or otherwise degraded the offering) because of news that a competitor might acquire the startup.

The key benefit of a neutrality requirement is to enable innovation at the AI application layer. Today, the financial interests of AI FM providers—whether from investments in third-party AI applications, in-house development of their own applications, or the complicated web of investor-competitor-supplier dynamics that permeates the AI industry—create inherent conflicts of interest when those providers serve customers through an API. When AI companies use their dominance in the FM market to pick winners among AI applications, they skew and hinder innovation at the application layer. Upstarts don’t have the breathing room to compete if FM providers’ own applications enjoy an unfair advantage.

Ultimately, a neutrality rule is a relatively light-touch way to protect the end-users of generative AI—startups and businesses, consumers, and the public—that benefit from healthy competition among applications they might use.