OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide (arstechnica.com)

> instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
And what action did they take when he violated those terms? I'm betting none. That's really a point in favor of the plaintiff.
I dunno, this reads overwhelmingly like the parents trying to deflect responsibility. His cries for help were ignored, there's the thing about the medication, it had been going on for five years, and he mentioned he was reading other forums too, so it's not like it was just ChatGPT egging him on or something. I don't see how you could even pretend this is ChatGPT's fault and not the parents'.