US Military leaders meet with Anthropic to argue against Claude safeguards

theguardian.com

185 points by KnuthIsGod 18 hours ago · 94 comments

cyrusradfar 14 hours ago

Something is deeply troubling when a company proclaims "We want to protect people" and the government's response is "then we can't work with you."

It's baffling that the government would sacrifice countless use cases for real efficiency, ones that could actually help people, because Anthropic refused to build killer robots.

  • davidguetta 2 hours ago

    The military is about killing people tho

    • vharuck 30 minutes ago

      Isn't there a saying about the US military being a logistics firm that sometimes carries guns? There are a lot of military activities that don't involve violence.

      • array_key_first 6 minutes ago

        But all those activities serve the threat of violence, even if almost none of it ever materializes. It's logistics to build things so you might do violence.

  • Maxious 12 hours ago

    Note that the threat in the Axios reporting OP is based on is no longer "we can't work with you" but now "invoke the Defense Production Act to force the company to tailor its model to the military's needs"

    On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."

    https://www.axios.com/2026/02/24/anthropic-pentagon-claude-h...

    • nullocator 4 hours ago

      These AI companies and billionaires won't learn any lessons, but I hope the likes of Marc Andreessen stop getting treated as intelligent, well-reasoned actors in the media and on podcasts after they bitched and moaned about the Biden administration overstepping. If someone thinks the pragmatic approach to resisting reporting requirements and export controls is to cozy up with the devil who will force you into worse capitulations or just seize your whole company, then that someone (looking at you, Marc) is a fucking moron.

  • Spivak 13 hours ago

    In a way it's a testament that the safeguards are working for someone, because it seems like the internet at large is full of bypasses.

BLKNSLVR 15 hours ago

More government intervention in private enterprise? This pattern seems to be gathering steam; does that mean they're now subscribing to this model?

Or is this just par for the course, something that has always been going on, and it's only the reporting that's different, or the current context that makes it a more sensitive topic?

  • tototrains 15 hours ago

    No, this is very unusual. The US government taking a 10% stake in Intel is very unusual.

    There have been a few cases where national security prompted the government to nationalize private institutions: the railroads in WWI, steel mills in the Korean War, and CINB, which was deemed a security risk for being too large a bank.

    This admin has so far acted like a kleptocracy and, like, because of the Epstein files if they lose power many will go to jail, so there's a huge incentive to remain in power.

    Wars are good for remaining in power. Dictatorship is good for remaining in power.

    This is all very, very, very unusual in US history (except maybe when businesses tried to overthrow the government in the 30s but we don't talk about that).

    • watwut 8 hours ago

      > This admin has so far acted like a kleptocracy and, like, because of the Epstein files if they lose power many will go to jail, so there's a huge incentive to remain in power.

      I find this to be an unrealistic worry. Just like with Me Too, the perpetrators will eventually be protected. Just like with any previous abuses, including war crimes and so on, high-level people will be protected first, celebrated second, and then we will collectively move on to pretending they were being treated unjustly the whole time.

      The number of people who think the real victims of abuse are the perpetrators, and the real wrongdoers are the victims who talk about it, is just too high. It is rarely openly framed or phrased this way, the words used are always nicer, but that is the overall theme.

    • 5o1ecist 14 hours ago

      > (except maybe when businesses tried to overthrow the government in the 30s but we don't talk about that)

      That doesn't feel familiar at all! This clearly is just yet another wrong, completely bonkers conspiracy-theory! Just like all the others! No cheese pizza eating billionaires would ever even think of this!

    • JasonADrury 6 hours ago

      >This admin has so far acted like a kleptocracy and, like, because of the Epstein files if they lose power many will go to jail, so there's a huge incentive to remain in power.

      This is a bold claim that requires some evidence to accompany it.

      So far there's been very little in the Epstein files to implicate anyone of consequence in any criminal activities.

      When the rare documents that actually did offer evidence of potentially criminal behavior surfaced, Andrew and Mandelson were swiftly arrested. We can see that the evidence is being acted upon, it's just not very exciting.

      • JKCalhoun 4 hours ago

        > So far there's been very little in the Epstein files…

        Numbers get thrown around, some suggesting only 2% of the files have been released.

        I'm confident that even if 99% of the files were eventually released that the last 1% held back are far and away the most damning.

        • JasonADrury 4 hours ago

          Possible, but we don't really have any genuinely convincing reason to believe that there are any particularly damning (in terms of criminal conduct) files there.

          • text0404 an hour ago

            There are many genuinely convincing reasons to believe that.

            The simplest is the Trump administration aggressively asserting the importance and impact of Epstein's network, followed by excuses to downplay that impact and prevent release of the files (going as far as calling it a hoax, claiming Trump was an FBI informant, and investigating only Democrats). These contradictory deflections are genuinely convincing reasons to believe there is more damning evidence which they're trying to cover up.

            More convincing reasons that there is further evidence of crimes in the unreleased files:

            - Witness/survivor testimony. Many victims have come forward, several naming Trump officials directly.

            - Epstein ran a sex-trafficking network which is thoroughly documented in the released files. Over half the files have not yet been released.

            - Many questionable/excessive redactions which US lawmakers have called "inappropriate". US lawmakers have also said that removal of certain files is illegal under the Epstein Act.

            - Epstein's extensive political, financial, and legal networks include lots of high-profile figures, and have already produced confessions, firings, resignations, and arrests.

            There are a lot more reasons, this was just off the top of my head.

  • dillona 14 hours ago

    Yes, the government pays (lots of money) for Claude Gov, which they use on their networks.

    In my experience they very much do not want to be told what they can and cannot do with the things they purchase. I'm surprised the deal got done at all with these restrictions in place.

    • BLKNSLVR 14 hours ago

      Purchasing a service is different from purchasing the company, though.

      As such I agree with the surprise at the deal getting done at all.

SoftTalker 15 hours ago

I love watching the plot lines of The Terminator play out in real life.

  • mbxy 14 hours ago

    Isn't it neat... I mean, stupid.

    I saw a quote today from Vonnegut: "We’ll go down in history as the first society that wouldn’t save itself because it wasn’t cost-effective."

    • juleiie 11 hours ago

      Saving yourself is always cost effective…

      In fact no matter the costs, the cost of not saving yourself is infinite.

      • dtj1123 5 hours ago

        Explaining this to fossil fuel advocates is surprisingly challenging.

      • reverius42 6 hours ago

        Rationally, yes. Humans are too often irrational.

        • juleiie 6 hours ago

          No, I believe that financially successful humans, as single units, are extremely rational and cold-calculating.

          The problem is that this rationality is often centered on a single beneficiary (you), because why would you care about any other beneficiary?

          However, time and time again it turns out that no company is as evil as government. Hence I am an anarcho-capitalist.

          On the whole, even with every company thinking only about itself, it is a distributed system, self-sustaining and self-correcting. No single unit has unlimited power.

          Historically it's always the governments that are vastly more evil and chaotic than any private enterprise ever conceived.

          And we can see it now in another example from the US government. No company could ever get as corrupt and evil as the current American elected officials.

          • JKCalhoun 4 hours ago

            You seem to tacitly acknowledge corporate America can also be evil, just not as evil as government can be? Why put corporate America on a pedestal at all then? Why content yourself with what you consider the lesser of two evils?

            Demand accountability from your elected officials. It can be done by not electing them. You have no such agency over corporate America (short of boycotting, I suppose).

            To my eye, the U.S.'s highest elected official is in fact also a company.

hansvm 15 hours ago

It's been all of 3 days since Claude decided to delete a large chunk of my codebase as part of implementing a feature (couldn't get it to work, so it deleted everything triggering errors). I think Anthropic is right to hold the line on not letting the current generation delete people.

  • notepad0x90 12 hours ago

    You didn't use git with a remote repo? Or did it somehow delete the repo, or perhaps you didn't commit and check out a feature branch before it ran?
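
    A minimal version of the precaution being suggested, assuming the project is already a git repo ("agent-scratch" is a made-up branch name):

```shell
# Checkpoint the working tree, then give the agent a throwaway branch.
git add -A
git commit -m "pre-agent checkpoint" --allow-empty
git switch -c agent-scratch   # agent works here; the original branch stays untouched

# If the agent deletes half the tree, recovery is:
#   git switch - && git branch -D agent-scratch
```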

  • AlexCoventry 14 hours ago

    I'm not blaming you, but it's scary how many people are running these agents as if they were trusted entities.

    • notepad0x90 12 hours ago

      They're tools; you don't ascribe trust to them. You trust or distrust the user of the tool. It's like saying you trust your terminal emulator. And in my experience, they will ask for permission over a directory before running. I would love to know how people are having this happen to them. If you tell it that it can make changes to a directory, you've given it every right to destroy anything in that directory. I haven't heard of people claiming it exceeded those boundaries and started messing with things it wasn't permitted to touch in the first place.

      • fragmede 8 hours ago

        That would be --dangerously-skip-permissions for Claude, and --dangerously-bypass-approvals-and-sandbox for codex.

        Aka yolo mode. And yes, people (me) are stupid enough to actually use that.

        • notepad0x90 3 hours ago

          It's a people problem then. Not blaming here; I'm just saying it isn't the tool being untrustworthy. I too get burned badly when I play with fire.

      • AnimalMuppet 4 hours ago

        OK, but we learned decades ago about putting safety guards on dangerous machinery, as part of the machinery. Sure, you can run LLMs in a sandbox, but that's a separate step, rather than part of the machinery.

        What we need is for the LLM to do the sandboxing... if we could trust it to always do it.
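
        The separate sandboxing step mentioned above can be as simple as never handing the agent the original tree; a sketch, where `some-agent-cli` stands in for whatever agent you run:

```shell
# Give the agent a disposable copy of the project, never the original.
cp -r "$HOME/project" /tmp/agent-workdir
(cd /tmp/agent-workdir && some-agent-cli)   # hypothetical agent command

# Nothing reaches the real tree until you've reviewed the diff:
diff -r "$HOME/project" /tmp/agent-workdir
```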

        • notepad0x90 3 hours ago

          Again, the trust is for the human. It's auto-complete; it hallucinates and commits errors; that's the nature of the tool. It's for the tool's users to put appropriate safeguards around it. Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself when it burns your clothes the moment you expose your arm to it. You're expecting a dumb tool to be smart and know better. I suspect that is because of the "AI" marketing term and the whole supposition that it is some sort of pseudo-intelligence. It's just auto-complete. When you have it run code in an environment, it could auto-complete 'rm -rf /'.

          • AnimalMuppet an hour ago

            > Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and start burning your cloth when you expose your arm to it.

            True. But I expect my furnace to be trustworthy to not burn my house down. I expect my circular saw to come with a blade guard. I expect my chainsaw to come with an auto-stop.

            But you are correct that in the AI area, that's not the kind of tool we have today. We have dangerous tools, non-OSHA-approved tools, tools that will hurt you if you aren't very careful with them. There's been all this development in making AI more powerful, and not nearly enough in ergonomics (for want of a better word).

            We need tools that actually work the way the users expect. We don't have that. (And, as you say, marketing is a big part of the problem. People might expect closer to what the tool actually does, if marketing didn't try so hard to present it as something it is not.)

  • BLKNSLVR 14 hours ago

    Unfortunately I think the 'death by algorithm' rubicon has already been crossed, even by the US.

  • petre 6 hours ago

    Hopefully they won't allow it to launch the nukes without input from the individuals in charge.

jmward01 16 hours ago

"Until this week, however, Anthropic’s Claude product was the only model permitted for use in the military’s classified systems."

I hadn't realized. This does make me consider using alternatives more.

sam0x17 12 hours ago

Anthropic winning big points with me for this one to be honest. Reminiscent of the Apple vs FBI days almost a decade ago

  • JKCalhoun 4 hours ago

    Winning? It's not over yet. And I still feel out in the dark as to what is really going on in the backrooms. But that seems, more than it ever has, to now be a part of the society we have found ourselves in.

notepad0x90 12 hours ago

If only a time-traveling robot and his human companions were to pay a visit to the decision makers at Claude (aka Cyberdyne? :)).

What are they using it for, though? Target selection for precise strikes? I'm guessing the argument will be that fewer lives are lost if Claude helps make sure the attacks are surgically precise?

  • halls-940 4 hours ago

    "Review our targeting algorithm and suggest improvements."

    Or the mass surveillance bit, "Parse this dataset and come up with a list of people whose emotions are most rapidly shifting towards violence."

  • petre 6 hours ago

    What they've always been used for:

    > Cold War computers were primarily driven by military necessity, focusing on nuclear weapon simulation, ballistic missile trajectory calculation, and cryptography to support Mutually Assured Destruction (MAD). Key uses included modeling hydrogen bomb design using Monte Carlo methods (e.g., on MANIAC), air defense systems like the Navy’s NTDS, and early AI for strategic planning.

moomin 9 hours ago

There's a conflict here that's nothing to do with the ethical dimension: Claude is regarded as a high-quality model at least in part because it's critical about what it's doing. The military, on the other hand, doesn't really encourage introspection. Even without ethical considerations there's always going to be a tension between quality and obedience.

  • cucumber3732842 7 hours ago

    The military has its own mechanisms for assessing the quality of its own output. They might be imperfect, but they're there. They don't need that from Claude.

    What they need is for it not to say "it seems you're trying to build a weapons system, can you please not do that" when someone asks it to sanity-check something at the edge of their technical expertise. Like making sure their proposed antenna dome is aerodynamically sane at transonic speeds so the aero guys don't have to waste time rejecting it outright. Or they need it not to paternalistically screech about safety when someone tells it to read the commercial user manual for some piece of equipment and then append into the usage sections all the non-OSHA stuff the military does when things don't work quite right.

  • varispeed 9 hours ago

    No, the military probably wants prompts like "how to make a missile" to be answered.

nitwit005 14 hours ago

Feels like they'll use it for purposes Anthropic didn't approve of, and then turn around and blame them when it turns out asking ChatGPT to determine which ships are hostile was a bad idea.

  • XorNot 12 hours ago

    That's exactly what's going to happen because it's already happening: companies and people leaped on "it wasn't us it was the AI" immediately.

chrischen 12 hours ago

Yesterday I was trying to figure out whether my expired nacho dip would be safe to eat, and wanted to know how much botulinum toxin would be dangerous if I ate it, so I asked Claude. It refused to answer, so I can see how the current safeguards can be limiting.

chid 14 hours ago

Kind of wild given the outcome appears to be https://time.com/7380854/exclusive-anthropic-drops-flagship-...

  • Sebguer 14 hours ago

    Utterly unrelated; the RSP had nothing to do with their usage terms and was entirely about the research and release of high-capability models.

h4kunamata 13 hours ago

Read: the USA, as usual, doesn't like it when a company doesn't give it what it wants.

Awwwnnnn, poor thing :)

It's like the US big techs getting mad because the Chinese AI companies are stealing their data just like, wait for it, the US big techs stole data from artists worldwide to train their models.

The sweet payback in the name of every single artist and company that has been affected by US greed.

Karma is a btch!

  • notepad0x90 12 hours ago

    That's every country in the world...

    "America bad" is no longer trendy or edgy, if you haven't heard. There is no pretense otherwise by anyone anymore.

gaigalas 15 hours ago

All of this is kind of weird.

https://www.bbc.com/news/articles/cjrq1vwe73po

> the Pentagon official told the BBC the current conflict between the agency and Anthropic is unrelated to the use of autonomous weapons or mass surveillance.

> The official added that the Pentagon would simultaneously label Anthropic as a supply chain risk.

*Supply chain risk*?

The BBC article seems to imply that the government wants to audit Anthropic.

This, coming at the same time those "distillation" claims were published, is all incredibly suspicious.

  • thephyber 15 hours ago

    All of the coverage of this is about the negotiation points of Anthropic vs Pentagon.

    Anthropic doesn’t want their software used for certain purposes, so they maintain approval/denial of projects and actions. I suspect the Pentagon doesn’t want limitations AND they dislike paying for software/service which can be withheld from them if they are found to be skirting the contractual terms.

    And THAT is why the Pentagon is using maximum leverage (threatening Anthropic as a supply chain risk label).

    • gaigalas 12 hours ago

      > Anthropic doesn’t want their software used for certain purposes

      How do you know the government asked for a specific use case?

      As far as I know, the meeting was private and we don't know what they talked about. I haven't found a single official press release or verified statement that supports this.

      The verified statements I found are just about the government wanting unrestricted access. That alone is not enough to imply "no guardrails". As I mentioned before, it could be just for auditing (especially in the light of current events involving distilling of the models).

      I think it's an extraordinary coincidence that this happened soon after the distillation thing. And I don't know what it means if it's not a coincidence.

  • hn_throwaway_99 15 hours ago

    Supply chain risk is a very specific designation, meaning not only would Anthropic lose Pentagon contracts, but no other company with Pentagon contracts would be allowed to use them either. It would have the effect of being a near industry-wide blackballing of Anthropic given all the major companies that have contracts with the DoD.

    • gaigalas 15 hours ago

      Yes. Incredible, isn't it? I'm curious at what would make the government do that.

      • thephyber 15 hours ago

        _The Art of the Deal_.

        The US federal government is no longer a good faith actor acting on behalf of American citizens and following US law, but now an autonomous corporation aiming to “get the best deal” via maximum leverage.

        • gaigalas 6 hours ago

          Dario Amodei used almost every single public interview he gave to press on the "Protect America, it's a matter of national security. Ban Chinese exports, etc".

          He was clearly dancing to the DoD tune, like he REALLY wanted a DoD contract, which he eventually got.

          But that's not the point. I'm talking about how coincidental all of this is with the recent Anthropic blunder with the distillation thing. That is my main point you dismissed.

      • AnimalMuppet 3 hours ago

        I can think of at least one possibility - confidentiality failure. If the customer data was not contained - especially if it was DoD data - that would be reason to do such a thing.

      • pjc50 7 hours ago

        They thought it would work, and it did?

yanhangyhy 12 hours ago

Person of Interest... who is gonna build the 'Machine'?

raincole 11 hours ago

> Both xAI and OpenAI have agreed to the government’s terms on the uses of their AI,

Uh... so why doesn't the US government simply work with OpenAI and xAI? Why do they have to use Claude?

teh_infallible 11 hours ago

It seems odd to me that the military doesn’t already have far superior models.

KnuthIsGodOP 14 hours ago

Claude is now the official LLM for Sauron and his killers.

ChrisArchitect 15 hours ago

[dupe] https://news.ycombinator.com/item?id=47140734

https://news.ycombinator.com/item?id=47142587

trlakh 14 hours ago

As long as The Boring Company can drill a private Cheyenne Mountain-style bunker into some granite for the billionaires, and a new bunker is constructed under the Silicon Valley-financed White House ballroom for the politicians, everything is just fine.

Hegseth and Rubio already live on a military base because they are afraid.

  • XorNot 12 hours ago

    I guess that's fortunate then because the Boring Company notably really sucks at drilling anything.

SpicyLemonZest 16 hours ago

It's inexcusable that the AI companies have not formed a united front against this. I've been skeptical of the idea that OpenAI leadership is outright MAGA, but even pure self-interest does not explain staying silent while the Pentagon demands autonomous killbots.

  • Sebguer 16 hours ago

    Brockman donated $25,000,000 to the MAGA super PAC; how much more 'outright' would you like him to be, haha.

    • LarsDu88 15 hours ago

      This is not only a big donation. It is actually the BIGGEST donation by any single individual.

    • cyanydeez 15 hours ago

      Shareholder value and MAGA value are a venn diagram of optical illusion.

    • SpicyLemonZest 16 hours ago

      He claimed, and until today I was willing to give him the benefit of the doubt, that he was trying to curry favor with a notoriously bribe-able President. Not exactly a paragon of moral virtue, but I wouldn't be able to do business with nearly any company in the US if I made that a dealbreaker. This clears the bar where I'm willing to cut ties and demand that everyone else do the same.

      • _aavaa_ 15 hours ago

        We must join with him, we must join with Sauron.

        • tototrains 15 hours ago

          Sauron might win, don't want to risk being on the wrong side of the post-apocalypse

          • BLKNSLVR 14 hours ago

            Just because you're on Sauron's side when it wins, doesn't mean you'll be on Sauron's side at any other point in the future.

            One of the things I find interesting about classifying literally any kind of trait within bounds of 'normality', and the culling / suppressing / discouragement of anything outside of that definition, is that there will just be new 'edges', and in short order these edges will be 'other', suddenly outside the definition because times are bad and it has to be someone's fault.

            And so on, ad infinitum, until the single supreme ruler is the one entity representative of normal, atop a mountain of dead abominations.

            • 5o1ecist 14 hours ago

              > One of the things I find interesting about classifying literally any kind of trait within bounds of 'normality', and the culling / suppressing / discouragement of anything outside of that definition, is that there will just be new 'edges'

              There is a general rule I discovered many years ago, through playing EVE ONLINE and learning how society works. Not the modern EVE ONLINE, the old EVE ONLINE. It was really good for that.

              Every new generation grows up with a new norm. Whenever hardships or challenges are removed, the new generation, having never needed to learn how to deal with them, will have a lower tolerance for them in general.

              Your "new edges" generally aren't actually new. They've always been there. It was just that nobody really cared, because they weren't the end of the world: People knew worse.

              It's a self-destructive downwards spiral.
