Pentagon used Anthropic's Claude during Maduro raid


The U.S. military used Anthropic's Claude AI model during the operation to capture Venezuela's Nicolás Maduro, two sources with knowledge of the situation told Axios.

The latest: After reports on the use of Claude in the raid, a senior administration official told Axios that the Pentagon would be reevaluating its partnership with Anthropic.

Why it matters: The episode highlights the tensions major AI labs face as they enter into business with the military while trying to maintain some limits on how their tools are used.

Breaking it down: AI models can process data in real time, a capability prized by the Pentagon given the chaotic environments in which military operations take place.

Friction point: The Pentagon wants the AI giants to allow it to use their models in any scenario that complies with the law.

What they're saying: "We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," an Anthropic spokesperson said.

The big picture: Anthropic is one of several major model-makers that are working with the Pentagon in various capacities.

What to watch: Discussions are ongoing between the Pentagon and OpenAI, Google and xAI about allowing the use of their tools in classified systems. Anthropic and the Pentagon are also in discussions about potentially loosening the restrictions on Claude.

Editor's note: The headline and story were updated based on comments from a senior U.S. official.