Did AI Misidentify the Minab School?


Recent reports about the military operations in Venezuela and Iran show that the use of AI for military purposes is taking off. In this article, I will discuss how large language models such as Claude are used for military needs. I hope this article makes clear that AI can be, and already is being, used to scale war.

Gatekeeping a technology the way the EU AI Act does is a security threat. China is taking the lead in the AI race, delivering cheap models and benefiting from the collective research mind by releasing its models as open-weight; all of this could make it immensely stronger.

On the other hand, the pathetic drama between OpenAI, Anthropic, and the like, Europe regulating instead of innovating, and the hyping of AI's future capabilities instead of seeing what is already here only make us weaker.

For this article I consulted a military expert who wished to remain unnamed. We talked about the possible applications of AI in the war in Iran. On February 28, 2026, a girls’ elementary school in Minab, southern Iran, was struck during the opening wave of US-Israeli airstrikes, killing between 165 and 180 people, mostly students. No official admission of responsibility has been made, but satellite imagery analysis by NPR, CBC, and CNN, confirmed by multiple independent experts, suggests the strike was likely a US airstrike resulting from outdated targeting information. The school had been physically separated from the adjacent IRGC military complex since 2016 but appears to have remained in an outdated target set. It should be noted that two competing hypotheses exist: either this was a targeting failure caused by stale data, or, as Al Jazeera’s investigation raises, the strike was deliberate. I will argue for the first.

I am going to break down why, in my view, the hypothesis that an AI system misclassified the school as a military target based on outdated data is plausible. My expert made a point that reframes how we should think about AI use in this war:

“If you are defending against 1,000 drones, that’s why they use swarming drones — it’s about scaling AI for protection and attacks.”

The sheer scale of targets struck by US forces across Iran in the opening hours is itself evidence that AI-assisted targeting was in play. Humans cannot generate, validate, and execute hundreds of simultaneous precision strikes without algorithmic support. When you scale targeting to that degree, the margin for stale records in the target database grows.

Deliberate targeting is what was used in the opening strikes on Iran. A target is placed on a pre-built list weeks or months in advance, based on satellite imagery, SIGINT-derived geolocation, and intelligence analysis.

The strike goes out against a database entry. This process formally involves a chain of humans:

  • analysts who review the incoming intelligence feeds,

  • commanders who evaluate the target’s accessibility and legality, a step that has been deliberately compressed under the current administration, and

  • legal advisors who verify compliance with IHL (International Humanitarian Law), a step that has also been significantly simplified lately.
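
To make the stale-record failure mode concrete, here is a minimal sketch of what a deliberate-targeting database entry might look like. This is purely illustrative: real target databases are classified, and every field name, value, and identifier here is my assumption, written in Python.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TargetRecord:
    """Hypothetical entry in a pre-built deliberate-targeting list.

    All field names are illustrative; real schemas are not public.
    """
    target_id: str
    coordinates: tuple[float, float]  # (latitude, longitude)
    classification: str               # e.g. "IRGC military complex"
    source: str                       # where the classification came from
    last_verified: date               # when the classification was last checked
    cde_level: int                    # collateral damage estimate bucket

# A stale record looks perfectly valid on paper. If the Minab hypothesis
# is right, the fatal entry would have resembled something like this:
minab_entry = TargetRecord(
    target_id="IR-0931",                        # invented identifier
    coordinates=(27.15, 57.08),                 # approximate location of Minab
    classification="IRGC military complex",
    source="satellite imagery + SIGINT, pre-2016",
    last_verified=date(2015, 11, 3),            # invented; never re-verified
    cde_level=2,
)
```

Nothing in such a record signals that the facility was split in 2016; the error lives entirely in a last_verified field that nobody is forced to look at.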

Here is an overview of the process from an official US Joint Staff / Pentagon briefing from 2009. Doctrine has changed significantly since then, and the briefing should be read as a picture of how this was supposed to work before AI entered the targeting chain.

Attack execution process from a Pentagon Briefing acquired by ACLU

Now consider what happened in Iran. The US struck nearly 2,000 targets in the first 100 hours of the operation. That list was not generated on the fly; it was built months in advance. But managing, sequencing, and executing 2,000 targets simultaneously across multiple domains still requires algorithmic support that no human coordination chain can match unaided. More importantly: maintaining a database of 2,000+ target records and keeping every single one correctly classified is exactly the kind of task that gets delegated to automated systems.
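
Continuing the hypothetical TargetRecord sketch above: once the list lives in a database, revalidation becomes a one-line query for a machine and a multi-week effort for a human analyst cell. The revalidation policy below is invented for illustration, not taken from any doctrine.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # illustrative revalidation policy, not doctrine

def needs_review(record: TargetRecord, today: date) -> bool:
    """Flag entries whose classification has not been re-verified recently."""
    return today - record.last_verified > MAX_AGE

# Screening the whole list takes milliseconds for software...
target_list = [minab_entry]  # imagine ~2,000 of these
stale = [r for r in target_list if needs_review(r, date(2026, 2, 28))]
# ...but only if someone wrote and ran the check. A pipeline that never
# re-verifies old entries will happily keep a school classified as an
# IRGC complex for a decade.
```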

Additionally, operators must formally estimate collateral damage (CDE, Collateral Damage Estimation): the number of civilians likely to be affected by the attack:

Collateral Damage Estimation Criteria

As we can see, executing a strike is supposed to be a tedious and time-consuming process. That is not what we see in the data, however.
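
The CDE criteria above boil down to a tiered approval ladder: the higher the expected civilian harm, the higher the authority needed to sign off. Here is a toy version of that logic; the thresholds and tier names are my inventions for illustration, since the real criteria are classified.

```python
def cde_approval_tier(estimated_civilian_casualties: int) -> str:
    """Map a collateral damage estimate to the approval authority it
    would require. Thresholds and tier names are illustrative only."""
    if estimated_civilian_casualties == 0:
        return "unit commander"
    if estimated_civilian_casualties < 10:
        return "component commander"
    if estimated_civilian_casualties < 30:
        return "combatant commander"
    return "national command authority"
```

Even this toy version makes the point: every non-zero estimate pushes the decision up the chain, and every escalation costs time.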

In the first 12 hours alone, the US and Israel carried out nearly 900 strikes, roughly one every 48 seconds, a tempo that would have taken days or weeks in any conflict before this decade. This raises questions about the human supervision requirements outlined above. The military expert I spoke with considered it highly unlikely that this tempo was achievable without AI, and other sources have said the same. As Craig Jones, a senior lecturer in political geography at Newcastle University and a kill chain expert, put it in The Guardian:

“The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought.”

To execute such a large-scale attack, with a pre-built list of 1,000 targets developed, CDE-cleared, and executed in a compressed planning and execution cycle, would be near impossible for humans alone. They would need to evaluate multimodal input (images, text, video feeds, signals intelligence) while simultaneously ensuring that multiple legal and operational conditions are satisfied, applying for permission at each level of the chain, and then executing. LLMs, along with other models, particularly computer vision systems, are highly capable pattern matchers. This is exactly how Palantir’s Maven Smart System works: integrated with Claude since 2024, it outputs precise location coordinates for missile strikes and prioritises them by importance. Matching targets, analysing whether conditions are satisfied, flagging requests and supporting targeting decisions can all be done within minutes.
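
To see why an LLM is attractive for this kind of triage, here is a minimal sketch using Anthropic’s public Python SDK to ask a Claude model to classify a satellite image. To be clear: this is my own illustration built on the public API, not the Maven integration, whose internals are not public; the model name, filename, and prompt are all my assumptions.

```python
import base64
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Any satellite tile would do; the filename is a placeholder.
with open("satellite_tile.jpg", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model choice
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64}},
            {"type": "text",
             "text": "Describe the facilities visible in this satellite "
                     "image and classify each as civilian or military, "
                     "with a confidence estimate."},
        ],
    }],
)
print(response.content[0].text)
```

An answer comes back in seconds. Loop this over a few thousand tiles and you have the triage throughput the opening-hours tempo implies, along with exactly the misclassification risk this article is about: the model will confidently pattern-match whatever the imagery, however old, shows.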

Multiple media outlets report that one of the Claude models by Anthropic was used to support intelligence analysis and target identification during the strikes.

Ironically, the US government had banned Anthropic from all government projects just hours before the strikes began, which quite clearly did not stop it from using Claude for military needs.

Further in the article:

  • Map pattern analysis

  • An experiment with Claude that shows it would convincingly identify the school as a primary target based on satellite imagery

  • AI usage for military purposes

  • Implications for world security, the economy, and what Europe should learn from it

Paid subscribers also get:

  • Priority answers to your messages within 48 hours

  • Access to deep dives on the latest state-of-the-art in AI

  • Free access to quarterly AI realist training

Founding members:

  • A 45-minute one-on-one call with me

  • High-priority personal chat where I reply to your questions within 24 hours

Support independent research and AI opinions that don’t follow the hype.

https://www.airealist.org/

20% off the subscription until the 9th of March!