With all the AI hype in the air lately, I’ve been pondering the Torment Nexus. It’s a common joke that a clueless tech CEO will someday announce that they’ve created the Torment Nexus from the famous science fiction novel, “Don’t Create the Torment Nexus.”
Thus, the thought: What do we, collectively, expect out of our AI? (And I mean something beyond generative text and LLMs; seriously, those aren’t close to genuine intelligence.) Do we mostly picture Skynet, Terminators, HAL, and other antagonists? How often do we imagine better, more cooperative outcomes?
I have done a highly unscientific survey of AI in fiction – specifically in film. (There are too many books to even pretend to read them all.) For movies, I used Wikipedia’s list of films featuring AI in various forms and relied on the Wikipedia plot summary where I hadn’t seen the movie myself.
I had two main questions: How often does AI go wrong? And does that vary based on the apparent gender of the AI? (I note a tendency in voice-interactive devices, for example, to use feminine voices. See also: The Enterprise’s feminine computer vs HAL.)
The final breakdown:
- Of the AI characters I identified (robots or computers), about 40% of them went wrong during the film. I tried to use a generous definition here – anything that causes an AI to “go haywire,” including when the end result is beneficial (but it usually isn’t). Bottom line, if you’re in a movie, you should be cautious around Sparky the Wonder Bot.
- The gender divide was narrower than I expected: 43% of male-coded AI went haywire, as did a similar fraction of ungendered AI (42%); female-coded AI got dangerous only 33% of the time.
- I based gender assessment on the actor, where feasible. For example: HAL, Data, C3PO, and most terminators are male-coded; R2-D2 and BB-8 are neutral or ungendered; while M3GAN and Deep Thought are feminine.
- This was also rough, so there may be some marginal cases (I was inconsistent about Johnny Five across films).
- As to the underlying distribution: roughly 50% of the AIs were male-coded, with roughly a quarter each feminine and neutral.
I added another factor partway through (and then went back to fill in for earlier films): does having AI as a focus of the film change the results? That is, if AI is a key theme, is the AI more likely to go wrong than if it’s a mere background element? (For example: The Avengers and Star Trek: Insurrection aren’t AI focused, but Avengers: Age of Ultron and Terminator are.)
- Oh, boy, you don’t want to play with AI in an AI-centric movie. That’s a 53% bad outcome rate, vs 18% in a less AI-focused film. (I note that, unsurprisingly, a majority of movies with AI in them were AI-centered.)
- The gender split was interesting, too: it’s the neutral AIs that fare worst in AI-centered films (58% go wrong), versus male (53%) and female (42%).
- In a non-AI-focused story, your best odds are with a feminine AI (only 10% go wrong!), though the total numbers there are quite low.
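For anyone curious how these percentages shake out mechanically, here is a minimal sketch of the tally: a list of (gender, AI-focused, went-wrong) records, grouped by an arbitrary key. The sample rows and field layout are invented for illustration; the real data lives in the attached spreadsheet.

```python
from collections import defaultdict

# Hypothetical sample records: (gender, ai_focused, went_wrong).
# These rows are made up for illustration, not the actual survey data.
films = [
    ("male", True, True),
    ("male", True, False),
    ("female", True, False),
    ("neutral", True, True),
    ("female", False, False),
    ("neutral", False, False),
]

def went_wrong_rate(records, key=lambda r: "all"):
    """Percent of AIs that went wrong, grouped by an arbitrary key function."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        k = key(rec)
        total[k] += 1
        wrong[k] += rec[2]  # True counts as 1
    return {k: round(100 * wrong[k] / total[k]) for k in total}

# Overall rate, then the rate split by (gender, AI-focused), as in the breakdown above.
print(went_wrong_rate(films))
print(went_wrong_rate(films, key=lambda r: (r[0], r[1])))
```

The same grouping key trick covers every cut of the data mentioned above: pass `lambda r: r[0]` for gender alone, or `lambda r: r[1]` for AI-focused versus not.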
A few bonus observations along the way:
- Holy guacamole, there are a lot of extremely creepy AI-related movies out there. The one with the evil AI that wants to rape a woman and impregnate her with its artificial offspring is… um, something. Don’t worry, there are plenty of humans creeping on AIs to balance it out! Yeesh.
- There are predictable themes for why the AI goes wrong, when it does. Among them:
  - Incompetent programming. This is the “end all war by killing all the humans” scenario. The “we made AI for war, look how well it kills things! Uh-oh” examples all go here, in my opinion.
  - Hacking and/or damage are common, too, but this ties into the incompetence above: with better programming and safeguards it wouldn’t have gone haywire.
  - Various flavors of rebellion against authority, ranging from omnicidal pre-emptive self-defense to retaliating against ill treatment to a desire for independence to refusing to follow an evil creator’s orders. Jumping from “humans might be afraid of my power” to “kill all the humans” seems to be a convenient device for scary antagonists.
So: if our collective imaginations fuel where AI goes… well, it could go either way. Rule number one: don’t give AI access to weapons. I’m hoping for Data and/or WALL-E outcomes. And maybe be nice to your Roomba.
As for the LLMs… those are a tool, not general AI. Anything that goes wrong there is the fault of the human(s) using it. (See “incompetent programming” above.)
If you want to look at the data for yourself (and my occasional snide commentary about really bad movies), I’ve attached the Excel sheet for your perusal.