Anthropic’s ties to the Pentagon came under scrutiny following a report that its AI model Claude was used by the U.S. military during a January operation to capture former Venezuelan President Nicolás Maduro.
The company refuses to allow its products to be used for mass domestic surveillance of U.S. citizens or in physical attacks where AI makes targeting decisions without human input — two red lines that Amodei reiterated to Hegseth on Tuesday, per the person familiar with their conversation.
In a statement, Anthropic spokesperson Maya Humes confirmed the Tuesday meeting between Hegseth and Amodei. She said the two men “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
A senior Pentagon official told POLITICO that Anthropic has until 5:01 p.m. on Friday before the DoD invokes the Defense Production Act against the company in a bid to compel access to its models. The official said Hegseth will also label Anthropic a supply chain risk at that time. The official added that the dispute has nothing to do with mass surveillance or autonomous weapon use and that the Pentagon follows the law.
The Defense Production Act gives the government broad powers over private companies in times of war or national emergency, and was used during the Covid-19 pandemic to compel production of medical supplies and accelerate vaccine production.
In 2024, some conservatives objected to the Biden administration’s use of it to force AI companies to share data on cutting-edge models with the government.
It was not immediately clear how the Pentagon intends to label Anthropic a supply chain risk — which typically requires the government and its contractors to cut ties with that company — while simultaneously invoking the Defense Production Act to compel the company to cooperate with the Pentagon.
Audrey Decker contributed to this report.