Why the US may blame AI for blowing up an Iranian girls' school, killing 175 people

Mass graves for students killed (AFP/Getty Images)

For two years I’ve written about the fallacy of marketing probabilistic chatbot output with anthropomorphism, at the absurd scammer levels Sam Altman has reached, a practice that has predictably escalated to new extremes in recent months. But I want this post to focus on the marketing’s target demographic, which in this case appears to be Pete Hegseth, who heads the current operation “Epic Fury” in Iran.

Let’s set aside the fact that the name “Epic Fury” sounds like the thoroughly average output of a prompt I would have enjoyed seeing; this post focuses on facts.

We’ve heard Pete Hegseth say the US military strategy is “AI-first” and that “the fastest innovator wins in modern warfare,” while never discussing his AI strategy from a technical or analytical perspective, as this talk on the “Military AI Revolution” shows.

I presumed he would be humbled by AI quickly, but let’s review the atrocity that happened last week.

According to eyewitness accounts, on the first day of the US and Israeli attack on Iran, bombs struck a girls’ school (my news source here is Wikipedia, and throughout this post I try to cite only the best available sources for the facts I state), killing 175 people in the morning during a break between classes.

The school was adjacent to a military target, another series of buildings in the same complex, so initially blame could plausibly have been placed on either side. But the bombs used precise coordinates that targeted each building individually, and the school was a separate building in the complex with its own coordinates. The attack destroyed the school with its own independently programmed bomb (while the other buildings in the complex were hit by their own bombs, which apparently destroyed their intended targets, as far as I know).

In the following days, Iran blamed the US and Israel, and the mass burial took place three days after the bombing.

Seven days after the bombing, Iran published a video as proof that it was a US Tomahawk missile that hit the school. In theory, a video like this could itself be AI-generated in future warfare, but the New York Times had verified it.

When confronted about this point-blank by a reporter that same day, Trump said he “thinks it was still done by Iran,” offering no nuance or explanation, while Hegseth said he was “still investigating,” even though this was seven days after the attack.

To me, the military’s AI appears to be the culprit: they used a tool created by Palantir, presumably to pull from multiple sources and identify targets for the bombs. Palantir’s tool is unlikely to be complex, since it uses the same technology as Google’s NotebookLM, but it was likely marketed to the military as a “special agent,” exemplifying AI marketing these days.

Most relatively intelligent people would want to verify that AI output is correct before acting on it, let alone a military conducting a high-profile attack with so many risks; the output is only as good as the talent verifying it.

The US military appears to have skipped some steps, presumably because “the fastest innovator wins modern warfare,” without yet acknowledging the consequences of being labeled a war crime and a humanitarian crisis by UNESCO. Hegseth will likely learn the cost of being “AI-first” without the right organization.

My hypothesis is that Pete Hegseth will blame AI for this rather than accept responsibility for a war crime.
