AI's mental health fix: stop pretending it's human


Some industry leaders and observers have a new idea for limiting mental health tragedies stemming from AI chatbot use: They want AI makers to stop personifying their products.

Why it matters: If chatbots didn't pose as your friend, companion or therapist — or, indeed, as any kind of person at all — users might be less likely to develop unhealthy obsessions with them or to place undue trust in their unreliable answers.

The big picture: AI is in its "anything goes" era, and government regulations are unlikely to rein in the technology anytime soon. But as teen suicides and instances of "AI psychosis" gain attention, AI firms have a growing incentive to solve their mental health crisis themselves.

Yes, but: Many AI companies have set a goal of developing artificial "superintelligence."

What they're saying: In a blog post this week, Mustafa Suleyman — co-founder of DeepMind and now CEO of Microsoft AI — argues that "we must build AI for people; not to be a digital person."

In a post on Bluesky addressing a report about a teen suicide that prompted a lawsuit against OpenAI, web pioneer and software industry veteran Dave Winer wrote, "AI companies should change the way their product works in a fundamental way."

Between the lines: Most of today's popular chatbots "speak" in the first person and address human users in a friendly way, sometimes even by name. Many also create fictional personas.

Friction point: Suleyman and other critics of anthropomorphic AI warn that people who come to believe chatbots are conscious will inevitably want to endow them with rights.

The other side: The fantasy that chatbot conversations involve communication with another being is extraordinarily powerful, and many people are deeply attached to it.

In a different vein, the scholar Leif Weatherby — author of "Language Machines" — has argued that users may not be as naive as critics fear.

Flashback: The lure and threat of anthropomorphic chatbots have been baked into their history from the start.

The bottom line: Most of us understand that chatbots aren't people, but for many, the illusion's allure is more potent than its dangers.

Go deeper: In Ars Technica, Benj Edwards explains why language models can't have personalities but do a good job of fooling us.