Today I was scrolling Instagram, as one does, and I saw a post from my local writers’ guild. “I bet they all write fiction,” I said to myself, in a pattern that helps explain why I’ve had such trouble finding people with whom I can hang out where I now live.
But, I investigated further, which I think is very cute and sexy of me. Then I saw their anti-AI post.
Then I thought, “Well, what else would you expect from a… guild?”
You see, my babies, I’m different.
A guild, for example, is motivated by narrow self-interest. It's ready to sacrifice the greater good to protect the small advantages the status quo affords its members. Whereas I only care about the greatest good for the greatest number, regardless of how it impacts me.
Also, my views of technology are more nuanced than everyone else’s. Where everyone else is on team AI or anti-AI, I see the gray.
Or, at least that’s how I like to see myself. Then Freddie deBoer comes along and writes something that puts me in my place, again.
This time, it’s one part of one sentence in a longer post about AI hallucinations.
He wrote: “We have tons of evidence that vast swaths of the public trust LLMs totally, with zero skepticism.”
At first, I read “vast swaths” as “most people.” I was immediately, well, skeptical.
I looked it up, and found some polling data showing that barely more than half of people trust AI tools like ChatGPT and Gemini.
That’s me. I don’t trust LLMs. I enjoy using them. I’ve found them helpful. But I know I should verify everything they tell me because they’re known to hallucinate and because they’re trained to please, not edify, me.
Now, as I’ve said, I do think I’m a little smarter and more moral and harder working than “the public.” However, when I hear that phrase, people who are dramatically different from how I see myself don’t come to mind. When I read “vast swaths of the public” I think about myself and people like me.
That’s not who Freddie is talking about.
In that search, I found another stat that I couldn’t stop thinking about.
Somewhere between 14 and 17% of respondents said they trust the information they get from LLMs “completely.”
That’s who Freddie is talking about. That’s who Freddie is worried about. That’s who everyone should be worried about.
Upon further reflection, I realized that what Freddie wrote was absolutely right. And the way he was right, and I was wrong, says something important about AI safety.
Like me, a little over half the public knows that the AI tech available to us is extremely untrustworthy.
What Freddie got and I didn’t is that 15% of the population is actually shockingly high. Especially when you consider the outlandishness of this belief and the stakes involved. Trusting an LLM when it’s wrong can be dangerous, or even deadly.
It’s hard to escape the conclusion that a large percentage of the population lacks the capacity to responsibly use LLMs. And that a just, well-functioning society would protect those people from LLMs.
It’s not like we don’t understand the concept.
There are a lot of people in the world. And we vary a lot. Smarts, morality, work ethic, conventional attractiveness, health, ambition, etc. are not evenly distributed. Some of us get more. Others get less.
One of the traits that’s not evenly distributed is what I’ll call “capacity.”
Some people are very capable of consistently making good decisions, or at least decisions that keep them alive and well and which they’ll approve of later. Some people are not.
For many people, this ability waxes and wanes.
That’s why many gun stores make you wait 24 hours. It’s why we put people in temporary psychiatric holds when we believe they pose an imminent danger to themselves or others. It’s why I advocate for forcing people into care for as long as they’re unable to properly care for themselves or as long as they endanger others.
Many of us gradually lose our capacity as we age. That’s one reason why elderly folks are at much higher risk of getting scammed, whether by AI slop or Nigerian check fraud. That’s one factor we consider when deciding when an older person needs a guardian, at-home care, or to move into a care facility.
It seems entirely possible that an LLM could fuck up a person with no known existing vulnerabilities. But I strongly suspect that if we looked into the histories of people who have experienced AI-induced psychosis, or whose chatbots convinced them to hurt themselves or others, we’d find that the vast majority had, in some way and for some reason, limited capacity.
I feel like the English-speaking world has traded actually caring for and protecting people with limited capacity for virtue signaling to each other about how compassionate and evolved we are.
For example, we used to call people with diminished capacity “lunatics,” “idiots,” and “retards.”
Those words weren’t insults, though. Or at least they weren’t meant as insults. They were more like diagnoses, before modern psychiatry. A lunatic wasn’t a bad person who needed to be punished or mocked. They were someone who was suffering and needed help.
Then we created diagnostic language and relegated those words to the status of epithets. But while our language softened, neglect crept in.
We “destigmatized,” “normalized,” and “mainstreamed” incapacity so hard that we forgot what it actually means. We learned to ignore, downplay, or deny the reality of true incapacity. Which has left people who aren’t capable of protecting and caring for themselves unprotected and uncared for.
This is why we have "frequent fliers" and people setting folks on fire on public transit and human beings injecting drugs into open sores on the streets of our richest cities. To put it bluntly, our failure to acknowledge the reality of incapacity and actually address it has normal people so understandably upset that they're willing to consider approaches to incapacity that are inhumane, ineffective, and divorced from reality.
In reaction, many people want to stigmatize the incapacitated even more. Not only are these assholes incapable of imagining themselves ever losing capacity, but they also lack the ability to see people unlike them as actual humans. They want to punish, kill, deport, sterilize, and/or incarcerate people below a certain capacity threshold. The ones who think they’re more “effective” or “rational” than everyone else want to leave them behind as they escape to Mars or their private cities. Or push them into some neo-feudal situation in which they believe they’d be lords, despite all evidence to the contrary.
This is something I really love about Freddie. He’s experienced actual incapacitation. And something he brings up is that maybe how other people talk or feel about incapacitated people isn’t the most important concern here.
Maybe more pleasant language is only helpful insofar as it results in or is associated with more and better care and protection.
The fact that we still haven't figured out how to care for our incapacitated fellows is a damning indictment of our society. It's absurd that we're still basically stuck between:

- Punishment, which we know doesn't work and creates new problems along the way
- Neglect
Adding LLMs to the mix is simply pouring gasoline on a fire. I want the benefits of AI. But I can't in good conscience pretend that AI isn't going to continue hurting people who lack the capacity to use it responsibly until and unless we set up some effective guardrails, acknowledge the reality of incapacity, and get people the care and protection they need and deserve.
