Anthropic vs. DoD: "Any lawful use" is a fight about control

2 points by colek42 22 days ago · 8 comments · 2 min read


I served 12 years in the infantry, then built targeting tools at JSOC against ISIS. Now I lead a team building AI tools that automate the compliance process. I’ve got opinions on Anthropic vs. DoD.

When people argue about “AI in weapons” like it’s a sci-fi trigger bot… I can’t take it seriously.

A “kill chain” isn’t a vibe. It’s a process:

Find, Fix, Track, Target, Engage, Assess (F2T2EA). Most of it is information work: sorting signal from noise, building confidence, tightening timelines, and getting decisions to the right humans fast enough to matter.
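The process above can be sketched as a simple state machine. This is a minimal illustration, not any real system: the `Track` class, stage names mapped to an enum, and the `advance` function are all hypothetical, written only to show the structural point that automation can carry a track through Find/Fix/Track/Target while the Engage step stays gated on an explicit human decision.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    # The six F2T2EA stages, in order.
    FIND = auto()
    FIX = auto()
    TRACK = auto()
    TARGET = auto()
    ENGAGE = auto()
    ASSESS = auto()

@dataclass
class Track:
    """Hypothetical record of one candidate target moving through the chain."""
    track_id: str
    confidence: float          # analyst/model confidence in the identification
    stage: Stage = Stage.FIND
    history: list = field(default_factory=list)

def advance(track: Track, human_approved_engage: bool = False) -> Track:
    """Advance one F2T2EA stage.

    Automation may move a track through Find/Fix/Track/Target on its own,
    but the transition into ENGAGE requires explicit human authorization;
    without it the track holds at TARGET.
    """
    order = list(Stage)
    if track.stage is Stage.ASSESS:
        return track  # chain complete
    nxt = order[order.index(track.stage) + 1]
    if nxt is Stage.ENGAGE and not human_approved_engage:
        track.history.append("held at TARGET: awaiting human decision")
        return track
    track.stage = nxt
    track.history.append(f"advanced to {nxt.name}")
    return track
```

The point of the sketch is the gate: everything before Engage is the information work described above, and speeding it up changes outcomes without the software ever making the engagement decision.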

That’s why this Anthropic vs. DoD fight is getting attention. It’s not just “ethics.”

-> It’s about control.

Here’s what’s actually on the table:

Anthropic says they’ll support the military — but they want two carve-outs: no mass domestic surveillance and no fully autonomous weapons (their definition: systems that “take humans out of the loop entirely” and automate selecting/engaging targets).

Anthropic also says DoD demanded “any lawful use” and threatened offboarding / “supply chain risk” pressure if they didn’t comply.

A DoD memo posted on media.defense.gov explicitly calls for models “free from usage policy constraints” and directs adding standard “any lawful use” language into AI contracts.

The dispute escalated fast, including federal offboarding/blacklist actions and a “supply chain risk” designation, as reported by major outlets.

Now my take, as someone who’s lived inside the targeting reality:

AI can absolutely help the kill chain without ever being the one “pulling the trigger.”

Speeding up Find/Fix/Track/Target changes outcomes, and it’s not hypothetical.

But if we’re going to talk about “any lawful use,” then stop outsourcing national policy to contract fights.

DoD already has policy on this: DoD Directive 3000.09 requires that autonomous weapon systems allow commanders and operators to exercise appropriate levels of human judgment over the use of force. So the real question isn’t whether humans matter.

It’s this:

Do we want safety and governance implemented at the model layer (vendor guardrails), the contract layer (“any lawful use”), or the law/policy layer (Congress + DoD doctrine + auditing)?

Because “Terms of Service vs. warfighting” is a stupid place to settle a question this big.

If you’ve worked in intel, targeting, acquisition, or governance:

Where should the boundary live: model, contract, or law? And who owns accountability when it breaks?

tacostakohashi 22 days ago

I think it's good for this boundary to live in multiple places.

haute_cuisine 22 days ago

Snowden already showed what lawful use actually means.

OgsyedIE 22 days ago

GPT, the author of this piece, did not in fact serve 12 years in the infantry.

https://en.wikipedia.org/wiki/lying

  • colek42OP 22 days ago

    That is quite an ignorant statement to make. I spent three years in combat, and am permanently disabled from my service.

    • OgsyedIE 22 days ago

      I have no doubt of your own experiences and losses, but you should be the author of your own words instead of outsourcing them.

      • colek42OP 22 days ago

My job is to communicate quickly and clearly; AI helps me do my job faster and more efficiently. But thanks for telling me how I should do my job. You come off as both ignorant and arrogant.

        • OgsyedIE 22 days ago

          Hey, you can use all the AI you like, I just object to you telling them to use the words "I" and "me" when they speak on your behalf.

          They didn't serve.

          • colek42OP 21 days ago

Where is your line: copy editing, drafting, reorganizing? You are going to have a busy, boring, and angry life if you want to comment on every post that shows signs AI touched it.
