Settings

Theme

Nuclear War: An LLM Scenario

chrisclapham.com

37 points by huey77 16 days ago · 29 comments

Reader

roxolotl 16 days ago

We don’t need agi or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision making to a machine.

And of course a human can make a wrong call too. In this scenario that’s what is happening. And of course we should bring all of our tools to bear when it comes to evaluating nuclear threats.

But that doesn’t make it less concerning that we’ve now got machines capable of linguistic persuasion in that toolset.

  • asah 16 days ago

    "hand over" is a misnomer - what actually happens is that there's an interaction with a machine and people either trust it too much, or forget that it's a machine (i.e. handed from one person to another and the "AI warning" label is accidentally or intentionally ripped off)

motbus3 16 days ago

This is not unlikely. This is actually likely. The instructions for those agents is to find signals that prove there is an attack. Llms are steered to do what they are requested. They will interpret the signals a strongly as possible. They will omit counter evidence to achieve their objective. They will distort analysis to find their objective.

This has been everyone's llm problem daily. How is not that clear yet?

  • chuckadams 16 days ago

    I don't disagree, but just to play devils advocate: the LLM can also be told to look for counter-evidence, and will at least make a stab at doing so. That's more than we can expect from the humans currently in charge.

laughingcurve 16 days ago

https://arxiv.org/abs/2509.17192

Shall We Play a Game? Language Models for Open-ended Wargames

Wargames are simulations of conflicts in which participants' decisions influence future events. While casual wargaming can be used for entertainment or socialization, serious wargaming is used by experts to explore strategic implications of decision-making and experiential learning. In this paper, we take the position that Artificial Intelligence (AI) systems, such as Language Models (LMs), are rapidly approaching human-expert capability for strategic planning -- and will one day surpass it. Military organizations have begun using LMs to provide insights into the consequences of real-world decisions during _open-ended wargames_ which use natural language to convey actions and outcomes. We argue the ability for AI systems to influence large-scale decisions motivates additional research into the safety, interpretability, and explainability of AI in open-ended wargames. To demonstrate, we conduct a scoping literature review with a curated selection of 100 unclassified studies on AI in wargames, and construct a novel ontology of open-endedness using the creativity afforded to players, adjudicators, and the novelty provided to observers. Drawing from this body of work, we distill a set of practical recommendations and critical safety considerations for deploying AI in open-ended wargames across common domains. We conclude by presenting the community with a set of high-impact open research challenges for future work

Simulacra 16 days ago

Reading this I was reminded of this story:

"At the Abyss: An Insider's History of the Cold War" recounted that the United States added a Trojan horse to gas pipeline control software that the Soviet Union obtained from a company in Canada. According to the author, when the components were deployed on a Trans-Siberian gas pipeline, the Trojan horse led to a huge explosion. He wrote: "The pipeline software that was to run the pumps, turbines and valves was programmed to go haywire, to reset pump speeds and valve settings to produce pressures far beyond those acceptable to the pipeline joints and welds. The result was the most monumental non-nuclear explosion and fire ever seen from space."

https://en.wikipedia.org/wiki/At_the_Abyss

chuckadams 16 days ago

Would you like to play a game?

rglover 16 days ago

The big problem here is determining how vigilant those in command are about vetting the AI's responses. This feels like one of those systems that works great until someone vaporizes a hallucinated target that was actually civilians or unintended targets. This should be mitigated by having a MITM, but still. Risky. Humans make mistakes, too, and they're inclined to just "believe what the computer says," so as much as I'd love to believe this ends with a white picket fence scene, my instincts are screaming "dig a bunker, homie."

adrianmsmith 16 days ago

> Replacing human hesitation with machine confidence removes the one safeguard that has prevented nuclear war since 1945. Until militaries implement documented human authorisation...we are blindly automating our own destruction

In the scenario described there literally is a human in the loop: the president is a human?

nivertech 14 days ago

https://x.com/nivertech/status/2028456969330192386?s=46&t=2P...

__lain__ 16 days ago

Every once in a while I'll send a false positive security alert to Claude, one that isn't even very subtle its just obviously incorrectly flagged, and every time it freaks out and tells me I have an active intruder and it actually gets itself worked up in a panic.

I have high hopes for our future.

thomascountz 16 days ago

See: https://news.ycombinator.com/item?id=47248385

Anthropic's AI tool Claude central to U.S. campaign in Iran...

627467 12 days ago

Damn, so House of Dynamite won't ever happen?

  • sigwinch 12 days ago

    You could make it a little more convincing:

    1. An uncertain Pacific launch seemed like a weak point. Instead, an arctic launch. Because volcanology studies things like CO2, maybe we fired the climate scientists who could tell whether a thermal bloom in the arctic could be volcanic instead of missile.

    2. Make the President more contemporary. Have his phone full of spyware. Have him continually screen calls from his kids, influencers, and other seekers of patronage. Have him dodge the hovering sycophants and pardon-seeking lawyers. Demonstrate what he’s actually doing instead of preparing for the hardest part of the job.

    3. Acknowledge that we have limited interceptors and that perfectly deploying them means that the first and last missiles get through.

user2722 16 days ago

No-one got fired for ~buying IBM~ following a statistical-based text output.

mdlxxv 16 days ago

A strange game. The only winning move is not to play.

How about a nice game of chess?

twoodfin 16 days ago

Since the beginning of the nuclear age, literally billions of dollars have been spent paying incredibly smart people to model all aspects of nuclear war, including the chain of escalation under uncertainty.

Not to discount the importance of this risk, but we’re not likely to sleepwalk into it, barring a collapse in strategic & operational competence in planning (yeah, yeah) that would make MANY risks dangerously severe.

  • collingreen 16 days ago

    There are several examples already of the modeling leading to systems that all incorrectly handled faults and pointed toward nuclear war as the correct next action. Each of these times _so far_ a human has gone against the strategic planning and operational competence you're talking about and decided personally to get more information before killing millions of people (and they were all correct so far!)

    Diluting or delegating decision making to committees, processes, models, or AI all have essentially the same shape.

    We can either appreciate how lucky we've been so far and actually learn from these near-doomsdays or we can choose to keep rolling the dice with our eyes covered.

  • pkaral 15 days ago

    There are so many implicit premises in that short comment, such as:

    - Incredibly smart people are always right when it comes to extremely complex systems involving both deterministic behavior and human psychology - The people with the nuclear codes will be given their orders by (or themselves be) incredibly smart people - Wargames work (they have a horrible track record) - The best plans are based on a complete understanding of the starting conditions and the factors that influence the modelling (including the "unknown unknowns")

    I could go on.

    • sigwinch 12 days ago

      My experience is that wargaming has a decent track record. In hindsight, Nimitz looks much more prepared than MacArthur, even if their early careers suggest the opposite.

    • twoodfin 15 days ago

      I’m not sure what you’re objecting to about my comment, except a bunch of “implicit premises” you read into it.

burnt-resistor 16 days ago

How about a nice game of chess?

user2722 16 days ago

I'd posit the faster we feed LLM exhisting nuclear crisis and invented, dissimilar to its training corpus, nuclear scenarios, the better we will know how wrong they can be. Fear-mongering isn't lucrative, isn't dopamine triggering, isn't actionable, doesn't look good on the resume, so it's tipically ignored.

  • itintheory 16 days ago

    > Fear-mongering isn't lucrative, isn't dopamine triggering

    Isn't it? Isn't fear-mongering one of the main selling points for news-media? And a driving factor of engagement in social media?

Keyboard Shortcuts

j
Next item
k
Previous item
o / Enter
Open selected item
?
Show this help
Esc
Close modal / clear selection