The Trump administration moved to designate Anthropic a supply chain risk. The move was supposed to cut off a company the government had decided was incompatible with defense work. Two months later, the designation is legally tangled, operationally hollow, and commercially surrounded by a coalition of more than forty organizations that effectively *are* the defense industrial base.
The mechanism is straightforward in theory. The Defense Department designates a company as a supply chain risk. DoD contractors are then prohibited from using that company’s technology in work touching government systems. The company is frozen out.
The problem for the blacklisters is that Anthropic has spent two years becoming load-bearing. Project Glasswing, announced April 7, 2026, is the most recent expression of that strategy. The consortium includes Microsoft, Apple, Google, Amazon Web Services, Nvidia, the Linux Foundation, Cisco, Broadcom, and more than forty other tech, cybersecurity, critical infrastructure, and financial organizations. The member list is a cross-section of the commercial infrastructure the defense sector runs on.
The arrangement is operational, not cosmetic. Anthropic provides its Mythos frontier model to the consortium for adversarial testing against real critical software stacks. Findings flow through partner security platforms into enterprise security workflows. The Linux Foundation provides a governance layer for responsible disclosure standards. This is not a lobbying front. This is a product flow.
The implication for the DoD blacklist is direct: if CrowdStrike runs Mythos-derived findings in its products, and CrowdStrike is DoD-compliant, then Anthropic is inside the defense supply chain by definition. The ban does not remove the dependency. It removes the visibility into it. The DoD ends up with less information about what its own contractors are running.
The Pentagon made the designation under two separate statutory authorities. The first, 10 U.S.C. § 3252, was the narrower track: it blocked Anthropic from being used as a subcontractor in covered settings. The second, 41 U.S.C. § 4713 (part of the Federal Acquisition Supply Chain Security Act, FASCSA), is broader and remains in effect. Judge Lin’s preliminary injunction blocked the § 3252 designation, the Presidential Directive, and the Hegseth Directive. The § 4713 FASCSA designation survives that ruling and is being challenged in the D.C. Circuit under that statute’s exclusive judicial review provision.
The operational mechanism for § 4713 is FAR 52.204-30, which prohibits using products or services from a designated source as part of the performance of a covered contract, regardless of whether those products are delivered to the government. Contractors holding DoD contracts containing this clause face affirmative compliance obligations: reasonable inquiry into use of designated source products, reporting to the contracting officer within three business days, and submission of mitigation plans within ten. The scope of § 4713 extends beyond formal subcontracting relationships. An aggressive interpretation of the “as part of the performance” standard may capture uses of Anthropic products, such as using Claude Code under a general license to write software that ends up in a DoW system, that the narrower § 3252 framework would not reach.
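The compliance clock described above can be sketched mechanically. The snippet below is an illustrative model, not official guidance: the three-business-day reporting window and ten-day mitigation window come from the description above, but whether the ten-day window counts business or calendar days, and how holidays are handled, are assumptions flagged in the comments. The function names are hypothetical.

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Advance n business days (Mon-Fri), skipping weekends.
    Federal holidays are ignored -- an illustrative simplification."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            n -= 1
    return d

def compliance_deadlines(discovery: date) -> dict:
    """Deadlines after a contractor identifies use of a designated
    source's product in contract performance: report to the contracting
    officer within three business days, mitigation plan within ten days.
    (Treating the ten-day window as calendar days is an assumption here;
    check the clause text of FAR 52.204-30.)"""
    return {
        "report_to_contracting_officer": add_business_days(discovery, 3),
        "mitigation_plan": discovery + timedelta(days=10),
    }

# Example: a discovery made on Thursday, April 9, 2026 would require a
# report by Tuesday, April 14 (Fri, Mon, Tue = three business days).
print(compliance_deadlines(date(2026, 4, 9)))
```

The point of the sketch is the asymmetry it makes visible: the reporting duty is triggered by discovery, not by delivery to the government, which is exactly why the "as part of the performance" standard reaches so far beyond formal subcontracting.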
Look at who showed up for Glasswing and who did not.
The membership is not incidental. Microsoft, Apple, Google, AWS, Nvidia, Cisco, Broadcom, the Linux Foundation, and more than forty other organizations cover compute, cloud, device security, networking hardware, enterprise cybersecurity, financial infrastructure, and open-source governance. This is deliberate structural encirclement. Anthropic did not build a partnership. It built a chokepoint.
Look at who is notably absent. Meta. OpenAI. Oracle. The three companies with the most direct competitive overlap with Anthropic’s commercial AI position. Their absence from Glasswing is by design. Anthropic built a consortium out of the companies whose own security stacks would be weakened by the blacklist’s enforcement, not the companies that might compete with it directly.
Notice what this means for the political economy of the ban. JPMorgan Chase alone spends over $10 million annually on federal lobbying. Cisco, Broadcom, Palo Alto Networks, and the major cloud providers maintain substantial lobbying presences of their own. These companies are now operating a consortium built around a company the US government has designated a threat vector. The natural equilibrium is pressure on the government to formally distinguish between Anthropic’s commercial AI products and whatever the DoD actually objects to. Or acceptance that the whitelist is too economically and politically important to sever.
The government created a blacklist. The commercial sector responded with a whitelist. The whitelist won.
The exclusion creates a gap in access to what is widely considered the most alignment-conscious frontier model available. The alternatives are real: OpenAI’s GPT-5 class via Azure, Meta’s open-weight LLaMA models, defense-native integration layers from Palantir and Anduril.
Anthropic’s Constitutional AI process produces models with measurably stronger adversarial robustness than the alternatives. Chinese AI labs have documented this: DeepSeek’s R series is built from publicly available documentation on that process, distilled from outputs generated by Anthropic’s own models. The gap is not architectural secrecy. It is the quality of the Constitutional AI process and the scale of the red-teaming investment. Nobody in the public record will put a number on the capability differential. That silence is the number. The policy logic runs backwards: the exclusion punishes the company with the most rigorous alignment process while adversaries reverse-engineer its methods from the public record.
Anthropic filed in two federal courts. In the Northern District of California (*Anthropic PBC v. U.S. Department of War*, No. 3:26-cv-01996), the court issued a preliminary injunction March 26, 2026. The DOJ filed its appeal to the Ninth Circuit April 2, 2026. In the D.C. Circuit (*Anthropic PBC v. U.S. Department of War*, No. 26-1049), Anthropic’s emergency motion to stay the § 4713 FASCSA designation was briefed March 23 and remains pending. Two courts, two statutory tracks, both live.
The district court apparently credited the First Amendment retaliation argument. The claim: DoD acted against Anthropic in part because of the company’s protected speech, including public statements by Dario Amodei on AI safety, the company’s refusal to build autonomous weapons systems, and advocacy for mandatory safety testing. If the court found a likelihood of success on the motivating-factor prong under *Nieves v. Bartlett*, the preliminary injunction reflects a serious legal theory, not a procedural punt.
The DOJ’s counterargument leans on national security deference. Courts have historically granted the political branches extraordinary latitude in procurement determinations. The government will argue this is committed to agency discretion, not subject to APA review, and that a procurement blacklist is not a regulatory action that requires notice and comment.
The tension here is structural. National security deference works when there is a clear national security judgment visible in the record. It works poorly when the record is classified, the process was informal, and the standard applied was never written down. If the designation was made without a formal record, without a hearing, and without a published standard, the very informality the government invokes as discretion becomes the plaintiff’s evidence: the arbitrary-and-capricious challenge is strong for the same reasons the deference argument is weak.
One specific thing that could change: require the DoD to publish a formal, reviewable standard for supply chain risk designations of frontier AI companies, with explicit criteria, a defined process, and a mechanism for the designated company to respond before the designation takes effect. The process that produced this designation had none of those things. That absence is why the arbitrary-and-capricious challenge is strong. A published standard would not guarantee fair outcomes. It would make the process auditable and the designation reviewable, the minimum due process a company designated as a national security threat ought to have. What this does not solve: whether the underlying policy of excluding alignment-focused AI companies from defense work is wise, which is a different question entirely.
The Ninth Circuit has not yet set a briefing schedule. The preliminary injunction remains in effect pending appeal. The outcome will depend heavily on how much classified information the government seeks to introduce and whether the court views the designation as a procurement decision or a regulatory action that triggers ordinary administrative law protections.
The offshore scenario here is analytical inference I am making, not sourced reporting. No established press account confirms Anthropic is actively pursuing a restructuring. But with the US policy environment this unstable, what follows is a mechanical map of what that path would require.
Anthropic could in theory restructure to UK jurisdiction. The UK has mandatory pre-deployment safety testing with inspection rights for the AI Safety Institute. This is precisely the regulatory environment Anthropic has publicly advocated for. Moving there would put the company inside a framework it helped design conceptually.
The mechanical requirements are substantial. A new UK legal entity with HMRC tax residency determination. Transfer pricing agreements between US parent and UK entity. US export controls on advanced AI models and training hardware would still constrain what a UK entity could independently develop. CFIUS review would likely be required given Anthropic’s access to sensitive training data and US government partnerships. Investor consent from Alphabet, Salesforce Ventures, and other major US investors would be needed for any material restructure.
If restructuring were genuinely pursued, it would take eighteen to twenty-four months and significant legal maneuvering. The current administration could actively obstruct the process at multiple points. The offshore option is best understood as a negotiating lever, not a concrete plan. The fact that it is theoretically available is itself a pressure point.
The supply chain risk designation was a policy designed for a simpler entity. Anthropic in 2024 was a frontier AI company with strong safety credentials and government contracts. Anthropic in 2026 is the operational center of a cybersecurity consortium that includes the companies the DoD relies on for its own supply chain integrity.
You cannot costlessly exclude what the system requires. That is the law this situation is demonstrating. The blacklist exists. The ban is technically in effect. The enforcement mechanism runs into the structural reality that Anthropic has been built into the security stack of every significant commercial technology company that also sells to the Pentagon. The DoD would need to sever ties with a wide range of vendors to actually enforce the exclusion.
The most recent data point is the consortium itself. Glasswing was announced two weeks after the preliminary injunction. Anthropic did not argue with the government. It built infrastructure that makes the government the outlier. The companies that joined represent a commercial response that makes the legal fight secondary. The government can win in court and still find that the product flows have moved outside its reach.
What happens when the blacklist becomes the thing that is hardest to enforce?
- “DoW and Anthropic showdown continues — navigating the Anthropic supply chain risk designations,” A&O Shearman, March 27, 2026 — confirmed two-track statutory basis (10 U.S.C. § 3252 and 41 U.S.C. § 4713/FASCSA), preliminary injunction scope, D.C. Circuit proceeding
- Project Glasswing consortium announced April 7, 2026 — Anthropic news page
- *Anthropic PBC v. U.S. Department of War*, No. 3:26-cv-01996 (N.D. Cal.) — preliminary injunction March 26, 2026, Judge Rita F. Lin
- *Anthropic PBC v. U.S. Department of War*, No. 26-1049 (D.C. Cir.) — FASCSA § 4713 proceeding, emergency stay motion briefed March 23, 2026, pending
- DOJ Ninth Circuit appeal filed April 2, 2026 — *Anthropic PBC v. U.S. Department of War* (N.D. Cal. appeal)
- *Nieves v. Bartlett*, 139 S. Ct. 1715 (2019) — Supreme Court of the United States
- 10 U.S.C. § 3252 — DoD subcontractor supply chain risk designation authority
- 41 U.S.C. § 4713 / FASCSA — broader acquisition supply chain risk authority; FAR 52.204-30 operationalizes compliance obligations
- FAR 52.204-30 — “Prohibition on Use of Certain Covered Articles” — the actual contract clause implementing § 4713
- Anthropic Constitutional AI framework — Bai et al. (2022), “Constitutional AI: Harmlessness from AI Feedback,” Anthropic Publications
- JPMorgan Chase federal lobbying expenditure — OpenSecrets.org (figure widely reported in financial press; direct URL pending manual verification: https://www.opensecrets.org/federal-lobbying/clients/summary?part=c&id=D000028115)