Pentagon approves OpenAI safety red lines after dumping Anthropic


Sam Altman said late Friday night that his company had reached an agreement with the Pentagon to use its AI models, after the Defense Department agreed to safety red lines similar to rival Anthropic's.

Why it matters: The Pentagon has blasted Anthropic for days, contending its red lines for AI use in the military — mass surveillance and autonomous weapons — are philosophical and "woke."

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said.

Altman said he wants the terms extended to all AI labs.

Between the lines: Altman's statement acknowledges that domestic mass surveillance is illegal and that the Pentagon will comply with applicable law.

Catch up quick: Altman, in an overnight memo Friday to employees, laid out his company's approach, which has now been approved.

Defense officials and President Trump were irate at the idea that Anthropic — a company they perceive as leftist — and CEO Dario Amodei could have any say over how the Pentagon uses technology in its operations.

What they're saying: "This is a case where it's important to me that we do the right thing, not the easy thing that looks strong but is disingenuous," Altman said in the memo.

The bottom line: This could end up a win for OpenAI, which has managed to stay off the administration's bad side — at least politically.

Editor's note: This story has been updated with a later post from Sam Altman.