In February of 2026, the Department of War and Anthropic had a very public spat. It ended with the designation of Anthropic as a supply chain risk, after the DoW had threatened to invoke the Defense Production Act to compel changes to Anthropic's terms of service. While it's still not clear how the DPA could have been applied in this specific case (Lawfare has a great breakdown here), the mere threat kicked off a new round of discussion about something bigger: how much influence can the US government exert over frontier AI labs, and what happens when it tries?
This saga shouldn’t be interpreted as an isolated contract dispute. It reads more like the opening volley of what will likely be an intense, prolonged, and asymmetrically leveraged negotiation between the US government and frontier AI labs over who controls what is widely believed to be an extremely consequential strategic technology. The government was never going to let this technology remain entirely in private hands, and the labs were never going to hand it over without comment or condition. The question isn’t whether this negotiation happens; it’s how it plays out, and what survives.
I’m not qualified to map the precise legal mechanisms the government currently has at its disposal, and I’m not going to try. I suspect the legal and constitutional questions here will become some of the most interesting policy fights of the next decade. My working assumption in this post is simpler and more structural: when the stakes are high enough, the government can change the rules. It can rewrite law, invoke emergency powers, and exercise eminent domain over technology and infrastructure it deems critical to national defense and sovereignty. The specific text of the DPA as of February 2026 matters less than the fact that a sufficiently motivated sovereign can reshape the legal landscape to maintain its authority.
What I want to explore is what happens after the government decides to act, examined from the perspective of someone who spent a few years working on the technical side of a frontier AI lab and has some sense of what makes these organizations tick, and why they’re fickle and temperamental beasts.
The Defense Production Act is, historically, a tool for leaning on heavy industry. It was born in the world of steel and ships during the Korean War, then later used to jumpstart domestic aluminum and titanium capacity. It was invoked to redirect medical supply chains during COVID, and most recently to build domestic supply chains of strategic minerals. The pattern is consistent — the government redirecting physical capital and manufacturing inputs according to national priorities, usually by putting its thumb on the scales of the market and making it profitable for private actors to do what wouldn’t otherwise make economic sense.
Today’s frontier AI labs — the three or four private organizations capable of producing state-of-the-art AI models — have more in common with traditional heavy industry than most people realize, and the resemblance is growing. They control enormous amounts of physical capital: land, buildings, electrical grid capacity, behind-the-meter gas turbines, and raw computing power. This capital accumulation far exceeds the norm for even large software companies and is forcing record-breaking capital investments at US hyperscalers, some of the most valuable companies in existence. The analogy extends beyond capital. AI labs take large quantities of raw inputs in the form of datasets and transform them through energy-intensive processes into high-value outputs — AI models — which are then deployed as foundational infrastructure for other parts of the economy. Large capital expenditures, massive-scale transformation of raw material, producing higher-value outputs that the rest of the economy relies on. The similarities with steel mills are real.
But capital-intensive transformation of raw inputs is only one piece of a much more complex picture. The frontier — the small slice of AI models that top benchmarks at a given moment in time — is characterized by rapid oscillation between a small collection of the most well-funded and aggressive AI labs, generally understood to be OpenAI, Anthropic, Google DeepMind, and maybe xAI. Various leaderboard positions swap between these labs on the order of weeks if not days, and as of early 2026, US labs at the frontier have a ~7 month lead on their Chinese competitors.
For some military applications, being behind the frontier is fine — defense often prioritizes hardening and reliability over bleeding-edge specs. But for strategic domains, like cyberwarfare or scenario planning, where small intelligence advantages compound rapidly, the frontier is what matters. And staying at the frontier is where things get complicated.
The primary difference between a frontier AI lab and something like a steel mill is that there’s no formula for success in AI. Not only do labs lack a well-defined recipe for producing advanced models, but the target of these recipes is constantly shifting in response to research advancements and competitive dynamics. That’s the thing about the frontier — it’s sparsely explored, poorly defined, and reliant on imperfect benchmarks. A depressingly large amount of any lab’s success depends on luck: the particular set of YOLO’d hyperparameters used on a training run, faith that trend lines hold while scaling experiments through a mini-series of training runs, or making critical decisions under extreme time pressure based on vibes and reading the tea leaves. A surprisingly common feeling inside the labs is that an experiment produces a miraculous result for inexplicable reasons, or an otherwise unremarkable training run fails to live up to expectations and requires intricate dissection just to understand what went wrong. These bets compound over time. OpenAI’s early investment in the reasoning paradigm still shows up in their dominance on math and science benchmarks, while strategic choices like whether to stick with dense Transformers or invest in sparse MoE architectures shape infrastructure commitments, team expertise, and what a lab is even capable of trying a year later.
Perhaps most importantly, talent is remarkably scarce and non-fungible within AI. The knowledge of a handful of specific researchers makes the difference between being at the frontier and being in the middle of the pack, and this expertise is split across a wide variety of domains (multimodal, architecture, optimization, RL details, alignment, numeric stability, etc.). The difference between a 75th percentile AI researcher and a 99th percentile researcher might be a 50x difference in total compensation. While talent is non-fungible, it is extremely mobile — California’s lack of non-compete agreements means researchers can hop between labs at will. From a wider perspective, there’s a je ne sais quoi synergy to specific configurations of researchers, leadership, tertiary resources, and mission specifics that makes the difference between a top team and an expensive pool of talent that can’t quite figure out how to deliver results. Meta famously poached top researchers from DeepMind and OpenAI with pay packages reaching into the tens of millions of dollars, but still hasn’t managed to regain a position alongside top US labs.
The Meta example reveals something structural about this workforce, which is that these aren’t purely rational economic actors. OpenAI was founded as a non-profit specifically to keep AGI development outside of traditional economic incentive structures, and still attracted top talent without offering meaningful economic upside at the time. Anthropic was founded when senior OpenAI employees defected over disagreements about safety, a decision that looked economically irrational before it turned out not to be. This ideological streak runs deep, and it’s reinforced by the fact that a surprising fraction of core researchers at frontier labs are already deca- or centi-millionaires. For these individuals, the marginal million dollars matters less than mission, prestige, and proximity to interesting problems. The landscape is evolving as these companies grow rapidly and new entrants join with different incentive profiles, but researchers who matter most for frontier capability tend to be exactly the people with the most options and the least financial pressure to stay.
Finally, and perhaps least intuitively to those who haven’t worked inside these labs, the frontier is held together by a set of interlocking flywheel effects which amplify small differences and turn these differences into insurmountable leads. The first flywheel comes from product usage. Users converge on products powered by the best models, and that usage generates the training data for the next generation of models: preference signals, edge cases, interaction traces, feedback that gets recycled directly into post-training. The second flywheel comes from fundraising. Usage turns into revenue, revenue supports valuation, valuation makes it easier to raise, and fundraising buys the scarce inputs that determine the next cycle: compute, infrastructure, talent, and data acquisition. These flywheels exist in some form across all VC-backed software, but they’re especially vicious at the frontier because development of frontier models is both extraordinarily data-hungry and extraordinarily capital-intensive.
The final flywheel, and in my opinion the most important but least legible to those outside of the labs, is the nascent intelligence flywheel. The day-to-day work at frontier labs is overwhelmingly software development. Having a better coding model improves programmer productivity, which lets engineers ship better products and better research tooling, which in turn gets the next iteration of the model out faster, which feeds back into the flywheel. This is where AI is so different from any other industry — making better steel doesn’t help you make even better steel, but it takes intelligence to make intelligence, so at some point you begin to get compounding effects (this is well short of any recursive self-improvement or takeoff scenario). A useful example is the recent reporting that xAI staff were using Claude internally via Cursor for software development until Anthropic cut them off — a sign that even being “a few weeks behind” on coding assistance can matter at the frontier.
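To make that compounding intuition concrete, here is a toy numerical sketch in Python. It is not a model of any real lab: the parameters are invented, and the only point is that a small difference in how strongly each model generation feeds back into the next cycle's productivity turns into a wide capability gap after a dozen iterations.

```python
# Toy sketch of the compounding-flywheel intuition, not a model of any real lab.
# All parameters are invented for illustration: both labs get the same baseline
# gain per iteration cycle, but lab A converts current capability into future
# productivity slightly more effectively than lab B.

def simulate(cycles: int, base_gain: float, feedback: float) -> float:
    """Return relative capability after `cycles` iterations, where each cycle's
    gain is amplified by a feedback term proportional to current capability."""
    capability = 1.0
    for _ in range(cycles):
        capability *= 1 + base_gain + feedback * (capability - 1)
    return capability

lab_a = simulate(cycles=12, base_gain=0.10, feedback=0.02)  # slightly stronger tooling feedback
lab_b = simulate(cycles=12, base_gain=0.10, feedback=0.01)
print(f"lab A: {lab_a:.2f}x  lab B: {lab_b:.2f}x  relative lead: {lab_a / lab_b:.2f}x")
```

The exact numbers are meaningless; the shape is the point. Any mechanism that converts today's capability edge into tomorrow's iteration speed produces divergence rather than convergence.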
As we’ve seen, it’s best to think of these labs as unique entities which require not only the capital of heavy industry, but the chemistry and interpersonal dynamics of professional sports teams. They’re fragile, competitive, ideologically driven organizations where iteration velocity — the speed at which research insights become the next model — is everything. And governments are unlikely to let them accrue power unchecked.
What happens when these two realities collide? This section explores that collision through a simplified escalatory ladder: a lab tries to hold functional independence while the government escalates toward higher-leverage compulsion. I’m not doing legal analysis; there are genuinely fascinating constitutional questions here that others are better equipped to examine. My working assumption is simpler: when the stakes are high enough, the government finds a way to get what it wants. The question is what it costs.
What follows is a thought experiment: an extension of the real dispute into a scenario designed to trace escalation dynamics to their conclusion. The specifics are invented; the mechanisms are real.
The ladder is how the government escalates: obtaining access, setting direction, taking control.
Access to a Checkpoint
The situation starts off much like the recent Anthropic vs DoW dispute: over particular line items in a contract. The government wants to use models or AI systems it already has access to, but with certain restrictions removed. The lab considers the request and refuses, setting the ground for an initial round of escalations.
The government has many tools at its disposal here, and one early move is procurement pressure via contract priority (DPA Title I–style logic, priority ratings, etc). The result is that the DoW receives access to an existing AI system on terms it dictates.
From a strategic point of view, that victory is a depreciating asset. Within a quarter, the system is no longer frontier among closed-source alternatives. Within the year, it risks being behind what competitors have access to and behind what capable hobbyists can run locally. The lab walks away bruised but intact: enterprise business continues, investors are happy to see open tensions pause, and ideologically motivated employees begin preparing for future conflict in ways that give the company more leverage.
Access to the Latest Model
Unsatisfied with the prospect of using mechanized intelligence that’s falling behind the open market, the DoW returns and requests access to Anthropic’s latest leaderboard-topping version of Claude.
Anthropic agrees to provide access to its latest model, but the contract is notably missing the contested terms of service from the previous round. As the government begins deploying Claude internally, they notice behavior that differs from their expectations: requests are being rejected, not by a surrounding classifier system, but by the model itself. Claude is refusing to engage with requests that it interprets as having the potential to be used in automated kill chains. It’s refusing requests to classify packets that appear to contain English-language email and phone intercepts, arguing that it can’t be sure these aren’t used for domestic surveillance purposes. Importantly, these refusals are baked into the weights.
Directional Control
The DoW goes back to Anthropic and demands an updated model. They want one with the same intelligence profile but with the refusals removed. Anthropic refuses to comply — they have no product that matches the requirements of the DoW, and claim that they can’t be compelled to produce a product that doesn’t even exist. They claim that since alignment is a complex step of the end-to-end model creation process, and the relationship between intelligence and alignment isn’t fully understood, the DoW can’t even properly specify what this product would look like from a technical point of view.
The situation escalates into genuinely murky legal territory. Even for standard products with concrete definitions — ventilators, N95 masks — it’s unclear whether existing statute allows the government to compel production of new products. For LLMs it’s worse: the government can’t cleanly specify training outcomes, and some commentators argue that compelled changes to model behavior could raise novel constitutional questions. This is where the legal analysis gets genuinely fascinating, and also where I’m going to bracket it. My contention is that the government will find a way. After some legal showdown, the lab is compelled to produce a fine-tune removing the alignment behavior the DoW finds unacceptable.
The government got what it asked for. Now let’s look at what it cost.
The spiral is what happens once coercion touches the iteration engine: cycle time slips, reorg tax eats attention, repricing changes incentives, bailout traps spring, and the flywheels start running backward.
The Invisible Slowdown
An invisible slowdown sets in, unnoticeable to the government and largely subconscious even for the employees themselves. These labs are not just run on tacit knowledge; they’re run on tacit expectations of hard work and tacit understandings of how to work effectively. As the government tightens the thumbscrews, employees react in subtle but caustic ways. Late nights become less frequent, communication becomes slower and more formal, and CYA documentation substitutes for proactivity and high agency. The fine-tune is delivered, but it takes twice as long as a normal turnaround. The slowdown has begun.
First Departures and the Reorg Tax
Soon, the first departures hit. It’s not a mass walkout, but three or four load-bearing individuals over the course of a month. It probably starts with senior alignment researchers: people with strong convictions and the economic security to leave equity on the table, whose work was most directly impacted by the compelled fine-tune.
Senior departures force expensive reorgs in critical research teams, and those reorgs create a tax on the entire company. Tacit maps disappear. Morale weakens. New owners have to be onboarded to brittle systems. Leadership attention cycles get eaten away. The slowdown now extends beyond the government project — all of post-training, alignment, and safety are slowing down, and research leadership with them.
The Fundraising Flywheel Flips
The financial architecture of frontier labs is surprisingly fragile. These are some of the most hyped companies in the history of Silicon Valley, and they trade at extraordinary valuations — a 25x multiple on revenue while still far from profitability — built on the assumption that the iteration cycle will continue to accelerate. At the same time, they’re sitting on 10- and 11-figure compute purchase obligations, contracts signed under the assumption that future fundraising would come easily.
Then the market is forced to price a new category of risk: political risk. Public confrontations with the executive branch and high-profile departures of senior researchers are enough to trigger repricing. The IP that helped justify valuations looks less secure once investors seriously consider the possibility that it can be requisitioned by the government. The fundraising flywheel that powered the lab’s ascent now begins to work against it: political risk compresses multiples, compressed multiples lower valuations, lower valuations make it harder to raise, and difficulty raising makes forward compute commitments look precarious.
And crucially: the government has yet to force it out of a single contract, has yet to seize a single GPU, but the threat of government involvement begins starving the lab of the resources it needs to meet existing obligations and keep the model iteration cycle churning.
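A crude back-of-envelope sketch shows how quickly this repricing bites. Every number below is hypothetical (the revenue, the commitments, and the assumed raise capacity per round are all invented), but the shape of the problem is real: fixed forward compute obligations become much harder to fund as the revenue multiple compresses.

```python
# Hypothetical back-of-envelope illustration of multiple compression, not real figures.
revenue = 5e9                  # assumed annual revenue ($)
compute_commitment = 40e9      # assumed forward compute obligations ($)

for multiple in (25, 15, 8):   # revenue multiple before and after political-risk repricing
    valuation = revenue * multiple
    raise_capacity = 0.10 * valuation   # assume ~10% of valuation can be raised per round
    rounds_needed = compute_commitment / raise_capacity
    print(f"{multiple}x multiple -> ${valuation/1e9:.0f}B valuation, "
          f"~${raise_capacity/1e9:.1f}B per round, "
          f"{rounds_needed:.1f} rounds to cover commitments")
```

Nothing about the underlying model or roadmap has to change for the arithmetic to turn hostile; the multiple alone does the damage.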
The Bailout Trap
As the financial picture deteriorates, the government faces a familiar dilemma. Anthropic isn’t a lone startup at risk of failing; its cap table reads like a list of companies propping up US GDP growth, and the compute commitments are underwritten by the same hyperscalers building the backbone of the US cloud economy. Anthropic failing would send shockwaves through the companies investors are counting on to justify the entire AI race.
So the government does what governments do: it steps in, escalating the nationalization sequence. It underwrites loans and guarantees compute commitments.
This involvement comes at a price. The government demands board seats, reporting requirements, more processes, more oversight. Underwriting terms include obligations for deeper involvement in classified government projects. Classified work brings clearance requirements, fragmenting the workforce and introducing new communications barriers within the company. The government gets its control, but this control introduces bureaucratic friction that is incompatible with staying at the frontier. Claude’s development now moves at the speed of Lockheed Martin.
Reverse Flywheels
While existing obligations are underwritten, investor hype plummets and valuation multiples return from the stratosphere. Employee equity is underwater and comp packages become less competitive. People who joined for financial upside begin looking for exits, and competitors sense blood in the water. The lab has now lost employees from both ends of the motivation spectrum: the mission-driven and the money-driven.
Enterprise customers begin pricing in political risk and sense sluggishness in the development cycle, prompting them to look elsewhere when contracts come up for renewal. Consumers notice the product falling behind alternatives and start churning. The data flywheels that kept the lab at the frontier begin to break down. Usage drops, leaderboard positions sink, and this feeds back into the valuation cycle. The flywheels are now working fully against Anthropic.
End State
The end state is stark: the government achieves control over a company that is no longer at the frontier. The frontier moves elsewhere, to domestic competitors, to foreign labs, or diffused into open source. Investors reel from the speed of wealth destruction, the defense establishment falls further behind the intelligence curve, and the economy absorbs the shockwave.
This was an intentionally simplified scenario: a single less-than-perfectly cooperative company coming under increasing coercive pressure from the government. The point was to trace one plausible path through to completion, emphasizing the complexity of frontier dynamics and how tacit organizational capacity can make coercion self-defeating. But the real world is messier. The frontier is defined by a handful of hyper-competitive labs with different incentives and particular skills. What happens when the government stops treating this as a single-company dispute and starts shaping the frontier as an industry, rewarding compliance, punishing resistance, and implicitly choosing winners?
At the industry level, AI labs fall into a rough 2x2 categorization. Along one axis is lab capability — is this lab at the frontier, can it remain there, what’s its trajectory, what’s its capital allocation, and so on. Along the other is lab compliance, i.e. mission alignment with the USG — is the lab ideologically constrained, and how cooperative will it be with the government’s requests. As we just saw, even if you assume the government has extreme leverage, subtly uncooperative labs can be very difficult to wrangle.
The government wants the top-right quadrant: frontier capability and willingness to provide the services it wants. For these thought experiments, assume no one starts there. If someone did, the game collapses. The bottom-left quadrant is uninteresting and we’ll ignore it.
That leaves the two interesting cases: frontier-capable but ideologically constrained (top-left), and willing but not frontier-capable (bottom-right). The government’s job becomes straightforward to state and hard to execute: move actors into that top-right box. You can either take a willing-but-behind lab and try to raise its capabilities, or take a frontier lab and try to bend its compliance.
So what levers of power does a sufficiently motivated government actually have?
At a high level, I see six. The interesting question isn’t whether these levers exist. It’s what they break when you pull them.
Physical Capital Reallocation
Given how capital-intensive frontier labs have become, capabilities are increasingly defined by access to physical capital: chips, megawatts, datacenters, and the infrastructure that keeps them running. The lightest-touch intervention is contractual, leaning on chip suppliers to reshuffle delivery schedules or pressuring grid operators and permitting bodies to reprioritize capacity in ways that favor more compliant labs. Moving up the stack, the government can go beyond queue-jumping and start seizing and reallocating existing assets, from behind-the-meter gas turbines and cooling towers all the way up to a fully operational datacenter and the land it sits on. This is an old-school nationalization move and it’s high leverage, but the timing is lumpy: chip orders play out over months (or years), while repurposing existing capacity can happen on the order of weeks, assuming the receiving lab can actually stand it up and use it.
Digital Capital Reallocation
Datasets, synthetic data pipelines, weights access, distillation rights, training routines, architectural details, scaling law refinements, eval harnesses: this is the uniquely “AI-shaped” lever, and it can move much faster than steel. The catch is that digital capital is path-dependent and leaky. A competing lab immediately knows how to integrate new GPUs. Integrating architecture-dependent algorithmic improvements might be trickier, and much of the highest-value IP is tacit knowledge which, by definition, can’t be seized and transferred.
Money and Underwriting
Subsidies, prepaid contracts, loan guarantees, indemnification, liability shields: the state can make cooperation cheaper and safer. This is usually the fastest lever to pull, but it’s rarely the highest leverage, because frontier capability is bottlenecked on talent, iteration velocity, and hard infrastructure rather than cash alone. And underwriting tends to come with strings (audits, reporting, classified workstreams) that quietly slow the cycle time it was meant to accelerate. You can buy a lot of activity this way, but not necessarily the frontier.
Governance Control
Most of the levers above focus on boosting the capabilities of cooperative industry partners. If the government wants to attack from the other direction, i.e. making a highly-capable lab more cooperative, the main tool at its disposal is governance. This means influencing who actually holds power inside the company. The path there can be indirect and ugly: investigations, compliance overhead, and nationalization threats can drag down valuation until a bailout-for-board-seats deal becomes plausible, after which the board can replace a non-compliant CEO and begin redesigning the organization from the top.
The problem is that the people who do the low-level work which allows a company to remain competitive can’t be compelled to stay, or be compelled to try their best. You can install a favorable leadership team; you can’t force a researcher to show up and sprint 80 hours a week. That makes governance control high-leverage on paper and unusually likely to blow up the frontier engine in practice.
Talent Reallocation
While the government has limited direct leverage over talent, it can manipulate incentives. The most obvious pressure point is that frontier labs employ a meaningful share of people on visas (e.g. O-1 or H-1B) or with dual citizenship. One move is to impose clearance requirements on certain AI workstreams and then selectively route exemptions or carve-outs to preferred industry partners, fragmenting the labor market in a way that nudges talent toward “approved” labs. Another is visa leverage itself: selectively speeding renewals, tightening eligibility, or creating fast lanes for compliant partners. This allows the government to turn immigration bureaucracy into a talent allocator.
Regulatory Coercion
Finally, the government can selectively construct regulation that benefits certain industry partners while punishing non-compliant labs. The space of possible moves is vast: add new burdens to “shadow grids” while smoothing the path for labs tied into the public grid; tweak environmental rules in ways that quietly privilege certain power mixes (solar/hydro vs natural gas); adjust tariffs and export controls so specific datacenter inputs become cheaper for one class of hardware than another (TPUs vs Nvidia GPUs); add new permitting layers and then bias who gets processed first. The details here get gnarly fast and are well beyond my expertise, but the shape of the lever is clear.
The important thing about regulatory coercion is that it’s generally destructive. It’s good at slowing or kneecapping non-compliant actors, but it doesn’t easily increase the absolute capability of the labs the government wants to win.
A motivated government has considerable power at its disposal to influence the landscape of competition within AI. The problem is that the dynamics at play are fickle and chaotic. Every new lever introduces new problems along the way.
The most immediate problem is value destruction. The AI industry has produced trillions of dollars of new market cap and raised more than $100B in funding on the back of sky-high investor expectations and finely balanced competitive dynamics. It doesn’t matter which lab becomes the target of government coercion; the mere demonstration that nationalization is on the table becomes a sector-wide repricing event. Physical capital seizure renders product roadmaps useless. Digital capital reallocation collapses valuations that depended on that capital’s scarcity. Toying with visa-based researchers’ employability may prove more damaging than any of the above. Once-confident investors head for the exits, and hundreds of billions in forward infrastructure commitments come under scrutiny as fundraising falters across the board.
There’s a subtler distortion at work too. The things that actually matter for the frontier (iteration velocity, tacit knowledge, who is likely to be near the frontier in six months) are precisely the things the government will struggle to observe. What the government can measure are benchmark positions, compliance checklists, audit trails, and eagerness to agree to terms. As the government becomes a dominant customer and power broker, the incentives facing labs start to warp. Anyone familiar with defense procurement knows the pattern: contractors get chosen for reasons that aren’t reducible to raw capability: where factories create jobs, whether revenue streams need bolstering to maintain an industrial base, which programs have congressional champions. The risk is that something similar starts to infect the AI frontier, with labs optimizing for government legibility at the expense of the underlying iteration engine.
How much this matters depends on how economically important AI remains outside of government. If commercial and enterprise demand stays strong, the market will continue selecting for real intelligence and utility, limiting how far the distortion can go. But at the margin, government-specific incentives can still bend things in strange directions.
The Shrinking Leverage Window
There’s a countervailing dynamic here that the rest of this piece is in tension with. Everything above assumes that frontier labs are fundamentally dependent on irreplaceable, mobile, ideologically motivated researchers, and that this dependence is what makes them resistant to coercion. While that assumption holds today, it’s one the labs themselves are actively trying to invalidate. Automating as much of the R&D pipeline as possible is a major priority at every frontier lab and the results are already starting to show. xAI’s MacroHard project is one public example, a direct attempt to automate large swaths of software engineering and operational work that currently requires human judgment. Every frontier lab is pursuing some version of this.
Nobody knows how far or how fast this goes. The transition probably follows a rough gradient. Support, operations, and finance roles compress first, and many of these are already partially automated. Software engineering follows as coding models improve, and this piece itself argued that even marginal improvements in coding assistance matter enormously at the frontier. Whether the core researchers and alignment teams whose tacit knowledge currently defines the frontier can be meaningfully automated is a genuinely open question, one that I don’t think anyone has a good answer to yet. It may be that the last 50 people in the building are the ones whose judgment simply can’t be replicated and that those 50 people retain enormous leverage indefinitely. Or it may be that the definition of “irreplaceable researcher” keeps shrinking faster than anyone expects. Nobody inside the labs knows either.
What this means structurally is that the balance of power inside these labs is slowly migrating upward. As the base of the organization automates, decision-making concentrates in a smaller and smaller executive layer, and this connects directly to the question of coercibility. The government is quite good at handling executives. Board seats, leadership replacement, regulatory pressure, personal liability: these are tools that work on individuals in ways they simply don’t work on a distributed research organization with hundreds of load-bearing people. A lab that depends on 200 irreplaceable researchers is a temperamental beast that breaks when you squeeze it. A lab that depends on 15 executives overseeing increasingly automated R&D pipelines starts to look a lot more like a defense contractor, and the government has decades of experience managing defense contractors.
The uncomfortable implication is that the employee leverage underpinning this entire analysis may itself be a depreciating asset, possibly the most consequential one in the whole framework. The question is how quickly this is happening, how far it will go, and whether the government can resist squeezing hard enough to break things in the interim.
Note — thanks to Herbie Bradley on Twitter for highlighting this dynamic.
Attractor States
Where does all of this settle? I can sketch four rough attractors the system might converge toward, though reality will probably be messier than any of them.
In one version, the government picks a champion (probably a willing-but-lagging lab from the bottom-right quadrant) and begins routing resources and preferential treatment its way. Competitors, unable to compete against a state-backed rival for contracts and resources, begin to run for the exits. The champion gets its resources but loses the competitive pressure that drove rapid iteration which kept the frontier stable in the first place. As the frontier slows under decreased competition, the US lead quietly decays.
In another, multiple labs scramble to occupy the magic quadrant simultaneously, competing not to build the best model but to be the most workable government partner. Because AI will remain broadly economically important, strong commercial incentives will persist to optimize for true intelligence and utility. In other words, this isn’t a world where capability stops mattering. But those incentives get warped by the weight of government-specific priorities. Compliance becomes a meaningful part of the product. Safety norms bend toward whatever the current administration wants to hear. The frontier stays domestic, but the government’s picture of where the frontier actually is grows increasingly noisy.
In the third, talent and capital simply leave. The frontier migrates to open source, to offshore labs, to jurisdictions that aren’t trying to control it. The government ends up with access to good-enough models on its own terms, but the lead over international competitors evaporates, killing the strategic advantage it was trying to capture in the first place. Not through espionage or export control failures, but through American researchers deciding they’d rather work somewhere else.
In the fourth, and perhaps most likely over longer time horizons, the government simply waits. As automated R&D reduces the labs’ dependence on irreplaceable human researchers, the leverage dynamics that make coercion self-defeating gradually dissolve. The executive layer gains power relative to the research workforce, governance control becomes viable without triggering the mass departures and flywheel reversals described above, and the labs start to resemble something the government’s existing institutional toolkit can actually manage. This is nationalization by patience rather than nationalization by force. The frontier doesn’t break because it was squeezed; it becomes controllable because the thing that made it fragile, namely, its dependence on people who could walk away, quietly automated itself out of the equation.
All four paths carry risks for the US lead over international competitors, though in different ways and on different timescales. The first three shrink it directly, through reduced competition, distorted incentives, or talent flight. The fourth is more ambiguous: if the government exercises restraint and lets the automation transition play out, it may eventually achieve workable control over labs that are still at or near the frontier. But “restraint” is doing a lot of work in that sentence, and the political incentives all push toward action now. This is the deepest irony of the entire nationalization impulse. The US government is spending enormous diplomatic and economic capital on chip export controls — restricting advanced GPU shipments, pressuring TSMC and ASML, policing smuggling networks — all to protect a lead over Chinese AI labs currently measured in months. And then it turns around and disrupts the domestic labs that produce that lead. Every month of premature domestic chaos is a month the gap closes. The government is doing to itself what it’s spending billions trying to prevent China from doing.
What happened between the Department of War and Anthropic in February of 2026 was not a contract dispute that got out of hand; it was the opening act of something much larger. This was the first visible collision between a government that will never allow private actors to hold power over a civilization-altering technology, and a set of organizations that were built, from their founding, on the conviction that how this technology is developed matters as much as whether it’s developed at all. That collision is not going to resolve itself in a single news cycle, or a single administration. It is the beginning of a negotiation that will shape the trajectory of American technology, American power, and possibly quite a lot more, for the next decade and beyond.
With that in mind, this piece is not meant to be a prediction; it’s meant to begin sketching out the dynamics at play. Those dynamics are the structural features of these labs that make them unlike anything the government has controlled before, the escalation patterns that emerge when coercion meets an organism built on agility and trust, and the industry-level chaos that follows when the government starts pulling on levers of power to exert control over the entire ecosystem. Many of the specifics here will turn out to be wrong, just as the details always are. The point wasn’t to get every detail right. The point was to start building a shared frame of reference for a conversation that’s going to dominate the next few years.
I deliberately waved away enormous areas of complexity — constitutional law, procurement doctrine, international trade dynamics, the internal politics of these labs. It’s not because they don’t matter but because they each deserve serious treatment from people actually qualified to give it. I’d love to hear from those people. The lawyers and policy scholars who can tell me where I’ve overstated state power or underestimated legal friction. The economists who understand how political risk actually propagates through capex-heavy industries. The defense procurement veterans who know where my model of bureaucratic friction is naive. Other lab employees who can complicate or correct the picture from the inside. I’m trying to build a shared frame here, not win an argument.
It was always inevitable that the government would try to exert control over frontier AI. The problems arise when it begins exerting that control without understanding that the frontier is a living process, not an asset.
AI disclosure — ChatGPT assisted with research related to lab fundraising, DPA history, and legal precedent. Claude helped with editing and gave feedback on the initial outline. NanoBanana from Google produced two of the diagrams here, though they were originally mocked up in Excalidraw.