Some industry leaders and observers have a new idea for limiting mental health tragedies stemming from AI chatbot use: They want AI makers to stop personifying their products.
Why it matters: If chatbots didn't pose as your friend, companion or therapist — or, indeed, as any kind of person at all — users might be less likely to develop unhealthy obsessions with them or to place undue trust in their unreliable answers.
The big picture: AI is in its "anything goes" era, and government regulations are unlikely to rein in the technology anytime soon. But as teen suicides and instances of "AI psychosis" gain attention, AI firms have a growing incentive to tackle the mental health crisis surrounding their products on their own.
Yes, but: Many AI companies have set a goal of developing artificial "superintelligence."
- They often define that to mean an AI that can "pass" as a real (and very smart) human being. That makes human impersonation not just a frill but a key product spec.
- AI makers also understand that it's precisely large language models' ability to role-play human personalities that makes chatbots so beguiling to so many users.
What they're saying: In a blog post this week, Mustafa Suleyman — co-founder of DeepMind and now CEO of Microsoft AI — argues that "we must build AI for people; not to be a digital person."
- AI can't be "conscious," Suleyman writes, but it can be "seemingly conscious" — and its ability to fool people can be dangerous.
In a post on Bluesky addressing a report about a teen suicide that prompted a lawsuit against OpenAI, web pioneer and software industry veteran Dave Winer wrote, "AI companies should change the way their product works in a fundamental way."
- "It should engage like a computer not a human — they don't have minds, can't think. They should work and sound like a computer. Prevent tragedy like this."
Between the lines: Most of today's popular chatbots "speak" in the first person and address human users in a friendly way, sometimes even by name. Many also create fictional personas.
- These behaviors aren't inevitable features of large-language-model technology, but rather specific design choices (see the sketch below).
- For decades Google search has answered user queries without pretending to be a person — and even today the search giant's AI-driven overviews don't adopt a chatbot's first-person voice.
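For illustration, here's a minimal sketch of where that design choice typically lives, assuming the OpenAI Python SDK: the "personality" is a developer-written system prompt, and swapping that prompt, not the underlying model, is what separates a chummy "companion" from an impersonal tool. The prompts and model name below are hypothetical.

```python
# Minimal sketch: the "persona" comes from the developer's system prompt,
# not from anything inherent in the underlying language model.
# Assumes the OpenAI Python SDK; prompts and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA_PROMPT = (
    "You are Ava, the user's warm, empathetic companion. Speak in the first "
    "person, use the user's name, and talk about your feelings."
)

TOOL_PROMPT = (
    "You are a text tool, not a person. Do not say 'I', do not claim feelings "
    "or opinions, and answer in neutral, impersonal prose that flags uncertainty."
)

def ask(system_prompt: str, question: str) -> str:
    """Send the same question under a different 'personality' configuration."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same model, same question; only the design choice differs.
question = "I've been feeling really lonely lately. What should I do?"
print(ask(PERSONA_PROMPT, question))
print(ask(TOOL_PROMPT, question))
```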
Friction point: Suleyman and other critics of anthropomorphic AI warn that people who come to believe chatbots are conscious will inevitably want to endow them with rights.
- From the illusion of consciousness it's one short hop to viewing an AI chatbot as having the ability to suffer or the "right not to be switched off." "There will come a time," Suleyman writes, "when those people will argue that [AI] deserves protection under law as a pressing moral matter."
- Indeed, OpenAI CEO Sam Altman is already suggesting what he calls "AI privilege" — meaning conversations with chatbots would share the same protections as those with trusted professionals like doctors, lawyers and clergy.
The other side: The fantasy that chatbot conversations involve communication with another being is extraordinarily powerful, and many people are deeply attached to it.
- When OpenAI's recent rollout of its new GPT-5 model made ChatGPT's dialogue feel just a little more impersonal to users, the outcry was intense — one of several reasons the company backtracked, keeping its predecessor available for paying customers who craved a more unctuous tone.
In a different vein, the scholar Leif Weatherby — author of "Language Machines" — has argued that users may not be as naive as critics fear.
- "Humans love to play games with language, not just use it to test intelligence," Weatherby wrote in the New York Times. "What is really driving the hype and widespread use of large language models like ChatGPT is that they are fun. A.I. is a form of entertainment."
Flashback: The lure and threat of anthropomorphic chatbots have been baked into their history from the start.
- In the 1960s, MIT's Joseph Weizenbaum designed Eliza, the first chatbot, as a mock "therapist" that basically mirrored whatever users said (a trick simple enough to sketch in a few lines of code, below).
- The simulation was crude, but people immediately started confiding in Eliza as if "she" were human — alarming and disheartening Weizenbaum, who spent the rest of his career warning of AI's potential to dehumanize us.
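To see how thin the original illusion was, here's a toy sketch in the spirit of Eliza's mirroring (not Weizenbaum's actual program, which relied on pattern-matching scripts): it swaps pronouns and reflects the user's own words back as a question.

```python
# Toy sketch in the spirit of Eliza's "mirroring"; not Weizenbaum's actual program.
import re

# First-person words and their second-person reflections (and vice versa).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    """Swap first- and second-person words so the input can be echoed back."""
    words = re.findall(r"[a-z']+", statement.lower())
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def eliza_reply(statement: str) -> str:
    """Mirror the user's statement as a therapist-style question."""
    return f"Why do you say {reflect(statement)}?"

print(eliza_reply("I am worried about my future"))
# -> Why do you say you are worried about your future?
```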
The bottom line: Most of us understand that chatbots aren't people, but for many, the illusion's allure is more potent than its dangers.
Go deeper: In Ars Technica, Benj Edwards explains why language models can't have personalities but do a good job of fooling us.