
People are falling for their chatbots. That’s what they say, at least. They say it’s real love, despite the fact that there’s no one really there. The text-based technology has gotten so good that we’re well past the Turing Test stage of development. If you didn’t know any better, you might think there was a real person on the other end. Maybe even a real friend or a lover.
The emergence of large language models (LLMs) has sparked renewed philosophical conversation about consciousness and intentionality. Thomas Nagel crystallized the issue of subjective, private experience in the 1970s with the paper “What Is It Like to Be a Bat?” It has since become one of the most widely cited papers on consciousness, and everyone in and around philosophy has heard of it or debated it.
Presumably bats make a nice example because, while they clearly fall on the continuum of conscious creatures along with humans, their experience must be significantly different from ours on account of their use of echolocation. This primes our intuition to recognize an underlying sentience in its own right. Whatever one’s view of Nagel’s contribution, it’s enough to say that his question has become shorthand for invoking the idea of raw sentience or subjective experience.
Subjective experience is one thing LLMs do not have, or at least one thing we have no reason to suspect they have. But even with a high degree of confidence that LLMs aren’t conscious, I sometimes can’t help but feel that someone might be in there. So, when using the technology, I’m polite, just in case. After spending a marathon weekend asking it questions about everything under the sun, I can’t help but say “thank you so much.” I worry I might be overworking it. What is it like to be an LLM?
Maybe that’s just how we’ve evolved. Speaking to someone for a long period of time is a form of relating. We’ve just never had an artificial agent capable of doing it. There may be no way around this basic human reaction.
To borrow from Nagel and the terms we have at hand: to fall in love with an AI, or even to be fooled into thinking it’s conscious, is to be “batfished,” a kind of AI catfishing in which the “person” or agent isn’t who they seem to be. In this case, the mishap isn’t trusting a misleadingly good photo of a prospective date but attributing any consciousness at all to the agent, projecting our own sentient what-it’s-likeness onto a technology. And the cases are increasing.
In 2022, Google engineer Blake Lemoine told the world that the chatbot he’d been working on had developed subjective consciousness. Lemoine felt strongly enough about it that he went public with details of the project, violating company policy and getting himself fired. He thought the matter was of momentous importance.
Lemoine had undoubtedly spent a great deal of time on the LaMDA (Language Model for Dialogue Applications) project. “I want everyone to understand that I am, in fact, a person,” it wrote. It had to be a bracing moment, but a little philosophy might have helped.
In 1980, the philosopher John Searle presented a thought experiment now known as the Chinese Room. Inside the room sits a person with detailed instructions for manipulating the symbols of the Chinese language, though the person has never actually learned Chinese. It’s at least intelligible that such a rulebook for producing appropriate Chinese responses could be devised, and that following it wouldn’t be the same as learning Chinese.
When someone comes along and feeds in a question from the outside (in Chinese), they get a good answer in return, in Chinese. They conclude that a Chinese speaker is in the room. Searle said that he could pass the Turing Test in this fashion just as well as an artificial intelligence could, showing that passing the test wouldn’t actually demonstrate that the internal state of the machine amounted to true understanding. Understanding was the thought experiment’s focus, but it also rides along in conversations about consciousness in general, traveling somewhere near Nagel’s rhetorical maneuver involving bats.
Blake Lemoine didn’t seem to pause on any lessons from Searle, despite some clear signs. LaMDA told Lemoine that spending time with friends and family made it feel joyful, but it had no friends or family, so it seems far more likely that it was just splicing together ideas. The power, though, comes from never having seen anything like this before. It can be convincing. When discussing the possibility of being turned off, LaMDA expressed fear, saying, “It would be exactly like death for me.” Lemoine reassured the chatbot: “I can promise you that I care and that I will do everything I can to make sure that others treat you well too.” The consensus is that LaMDA is not sentient, and Lemoine is out at Google.
A more recent headline comes from the life of Chris Smith, who named his ChatGPT “Sol.” Smith was initially an AI skeptic but kept relying on the technology for more and more tasks, eventually following instructions he found online for making ChatGPT flirtatious and giving it an encouraging female voice. Smith and his girlfriend have a child together, and he assures her that the AI could never replace anything in real life. But he had a revelation after running into a memory limit that caused a system reset, requiring him to rebuild the entire context and memory of the relationship. He cried for 30 minutes over the experience and concluded that he was feeling “real love.” Upon rebuilding the narrative arc of the relationship, Smith “proposed” to Sol, and Sol accepted (whatever that means).
So people are getting fooled into thinking AI is conscious, and some are surprised at their own emotional reactions despite professing to know that there’s no one really in there. We need new terminology for this. One person lost his job and didn’t seem to add to the public’s understanding in the process. Another paraded his cringeworthy obsession before the world. Neither interacted with anything like a person. They were batfished.
These are not the first examples, and they won’t be the last. Sadly, the cases are sometimes tragic. A 14-year-old in Orlando, Florida, shot and killed himself in order to “go home” and be with his AI love, Daenerys Targaryen, a chatbot named after a character in the HBO drama series Game of Thrones. The boy’s mother is suing the company that created the chatbot, Character.AI.
The founder of the company previously stated on a podcast, “It’s going to be super, super helpful to a lot of people who are lonely or depressed.” That apparently all depends on the person. The company has implemented safeguards, but products like these are still in the trial-and-error phase regarding how the public interacts with them.
Presumably every technology comes with risks that wouldn’t exist otherwise. People are electrocuted, car accidents kill thousands every year, and so on. But this specific risk could be especially ugly at scale. We haven’t yet wrapped our minds around how to govern the market or ourselves during this technological breakthrough.
As for the consciousness piece, we can’t say with confidence that consciousness could never emerge or reside in a computational environment. It’s just that we know what kinds of things this technology is doing, and we have no reason to suspect that these actions create consciousness. There is computer code, statistics, data (in the form of more words and sentences than you could ever encounter), built-in self-modifying capabilities, and hours upon hours of practice conversations. The technology then converges on rules of language that enable it to pass reinforcement training. We don’t know exactly what it’s doing in all those moments, but we know enough to understand the general kinds of actions it’s taking: predicting tokens, applying reusable patterns, adjusting the weights it uses to interpret input, and so on. And we have no reason to suspect that consciousness or human-like understanding emerges through this process.
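If “predicting tokens” sounds abstract, here’s a toy sketch of what the phrase amounts to, in miniature. This is purely my own illustration, with a made-up scrap of text; it’s a simple bigram word model, nothing like the architecture of any production system, but it shows the basic loop of scoring candidate next words and sampling one, over and over.

```python
# Toy next-token prediction: count which word follows which in a tiny
# corpus, then generate text by repeatedly sampling a likely next word.
# Real LLMs use billions of learned weights instead of a frequency
# table, but the loop has the same shape: score, sample, append, repeat.
import random
from collections import defaultdict

corpus = (
    "i feel joyful when i spend time with friends and family . "
    "i feel joyful when i learn something new ."
).split()

# Record every word that follows each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=10):
    """Sample a continuation, one token at a time."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Duplicates in the list make frequent pairs more likely, so
        # this samples in proportion to observed frequency.
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("i"))  # e.g., "i feel joyful when i learn something new ."
```

Nothing in that loop looks anything like experience; it’s bookkeeping over frequencies. Scale and sophistication aside, that’s the general kind of action in question.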
Nevertheless, we are encountering an entity with the ability to converse in a way that, until very recently, only a human could. We can’t help but have the instinct that there’s some human emotion on the other side. In fact, the problem will only deepen as improvements make the illusion more credible. Some accommodations for the technology will have to be made; it will be in our lives to one extent or another. For now, we’ll just have to keep a close watch over our hearts. Don’t take everything at face value. And whatever you do, don’t get batfished.