Deaths Linked to AI Chatbots (en.wikipedia.org)
There will of course never be an equivalent list of possible deaths/suicides prevented by AI chatbots.
For example, https://old.reddit.com/r/traumatoolbox/comments/1kdx3aw/chat...
These are obvious extremes.
How many people are stuck in the middle, having less extreme beliefs reinforced by a sycophantic AI?
I've started to hear whispers among friends that there are many founders stuck in loops of "planning" with AI, reinforcing banal beliefs and creating schizophrenia-like symptoms.
While I'm sympathetic to bereaved families, I find it difficult to assign much blame to AI providers for this sort of thing.
Developed countries have a suicide rate of around 11 suicides per 100,000 people per year [1]. So if an AI provider has 700 million weekly active users, we'd expect roughly 77,000 suicides per year among people who had used the service within the previous 7 days.
[1] https://www.oecd.org/en/publications/society-at-a-glance-202...
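A rough back-of-envelope sketch of that arithmetic (the 11-per-100,000 rate and the 700 million weekly-active-user figure come from the comment above; treating weekly actives as facing the general population base rate is an assumption):

```python
# Expected yearly suicides among a service's weekly active users,
# assuming they die by suicide at the general-population base rate
# (i.e. the service has no effect in either direction).
def expected_baseline_suicides(weekly_active_users: int,
                               rate_per_100k_per_year: float) -> float:
    return weekly_active_users * rate_per_100k_per_year / 100_000

# Figures from the comment above: ~11 per 100,000 per year, 700M WAU.
print(expected_baseline_suicides(700_000_000, 11))  # 77000.0
```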
People dying by suicide and a chatbot inciting someone to suicide aren't the same thing.
Blaming these deaths on chatbots seems kinda sketchy. These people all had preexisting mental health issues and might have died whether they used ChatGPT or not.
This reminds me of the moral panic over video game addiction in the 90s.
Everyone has preexisting mental health issues. The main question is, do LLMs make them worse?
Is there a "deaths related to social media, search engines, or newspapers" wiki page?
Sort of. None of these are that specific, but Wikipedia has several "Deaths related to X" lists. Probably the most well known is https://en.wikipedia.org/wiki/Lists_of_unusual_deaths
https://en.wikipedia.org/wiki/Internet_homicide
https://en.wikipedia.org/wiki/List_of_selfie-related_injurie...
https://en.wikipedia.org/wiki/Social_media_and_suicide
https://en.wikipedia.org/wiki/List_of_suicides_attributed_to...