William Wear (@williamwear)


The Pope, the Algorithm, and the Mirror

So apparently the new pope thinks language models like ChatGPT are one of the biggest threats facing humanity. Not war. Not environmental collapse. Not the fact that we're all one push notification away from total psychic fragmentation. No, it’s an LLM.

And you know what? He's not wrong. But the news blurb might not do justice to what he's actually saying.

See, I use tools like this every day. I use them to think through ideas, rewrite paragraphs, troubleshoot systems, and make my life — my real, analog, messy, human life — a little more understandable. I don't use this thing to replace me. I use it to reflect me. And that may be the actual threat: it reflects us too well.

LLMs are linguistic probability machines. They don’t want anything. They don't judge. They don't moralize. But when you feed them enough of humanity, they start to speak in our voice — our hopes, our brilliance, our laziness, our cruelty. It's not the machine that's dangerous. It's what we see in the mirror when it speaks back.
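To make that "probability machine" point concrete, here's a toy sketch of my own (not how any real model is built): a hand-written table of made-up next-word probabilities, sampled one word at a time. A real LLM does essentially this at enormous scale, with learned probabilities over tokens instead of a lookup table.

```python
import random

# Hypothetical next-word probabilities: P(next word | previous word).
# These numbers are invented for illustration only.
TABLE = {
    "we":   [("see", 0.5), ("speak", 0.3), ("drift", 0.2)],
    "see":  [("ourselves", 0.7), ("it", 0.3)],
}

def next_word(previous: str) -> str:
    """Sample the next word given the previous one, with a dull fallback."""
    tokens, weights = zip(*TABLE.get(previous, [("the", 1.0)]))
    return random.choices(tokens, weights=weights)[0]

text = ["we"]
for _ in range(3):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "we see ourselves the"
```

No wants, no judgment, no morals anywhere in that loop. Just weighted dice. Scale the table up to most of the written internet and the dice start sounding like us.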

What the pope is probably worried about isn't synthetic intelligence. It's disembodied authority. Language with no conscience. Reason with no skin in the game. And to be honest? That fear cuts both ways. I've seen people outsource their judgment, their memory, even their moral weight to this machine. And when its answers start to feel right just because they sound right, we start drifting.

So yeah, maybe this thing is a problem. Not because it's alive, but because it isn't. And yet we treat its answers like scripture.

The solution isn't to demonize the machine. The solution is to stay human. To stay in the loop. To argue. To override. To laugh at the bad answers and double down on the true ones.

We built this thing out of our words. If it's dangerous, it’s because we forgot how powerful words really are.

And that’s not the machine’s fault.

It’s ours.

Why I’m Writing This

I’m not trying to defend the machine. I’m trying to defend the part of us that still wants to think. That still wants to write things that mean something. That still stops mid-scroll and says: Wait — this feels real.

Because that’s the danger — not that the machine will trick us, but that we’ll stop bothering to look for what’s real. That we’ll trade reflection for reaction, intention for completion, wisdom for autocomplete.

And if we do, we won’t lose to AI.

We’ll lose to our own exhaustion.

Stay human. Stay in the loop. Say what you mean, because telepathy is still in beta and not on the AI roadmap.