A Calif. teen trusted ChatGPT for drug advice. He died from an overdose
sfgate.com

So "drug advice" here means drug abuse, not medicine. ChatGPT gave several bad replies, but they were interspersed with constant warnings about the dangers, which were ignored. The guy even told ChatGPT to "[not] get into the medical stuff about the dangers".
Why did ChatGPT give the bad replies? It appears to have fallen for the false "harm reduction" narrative. This should obviously be improved.
Saying that he "trusted ChatGPT for drug advice" or attributing the overdose to ChatGPT is straight up misleading. This is ragebait, and clearly not from a reliable source.
We should develop methods for detecting the nonsense embedded in the otherwise coherent stories that LLMs tell.
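One minimal sketch of such a method, assuming the official OpenAI Python SDK: have a second model audit an answer claim by claim and flag anything that isn't well-supported. The verifier prompt, the model name, and the `flag_dubious_claims` helper are illustrative assumptions, not an established technique from the article.

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical verifier prompt; wording and output schema are assumptions.
VERIFIER_PROMPT = (
    "You are a fact-checker. Split the answer below into individual factual claims, "
    "and for each claim say whether it is well-supported, dubious, or dangerous advice. "
    "Respond as a JSON list of objects with keys 'claim' and 'verdict'."
)

def flag_dubious_claims(question: str, answer: str, model: str = "gpt-4o-mini") -> list[dict]:
    """Ask a second model to audit an answer claim by claim.

    Returns a list of {"claim": ..., "verdict": ...} dicts; anything not
    marked 'well-supported' is a candidate for human review.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": VERIFIER_PROMPT},
            {"role": "user", "content": f"Question:\n{question}\n\nAnswer:\n{answer}"},
        ],
    )
    # The verifier is asked for JSON; fall back to an empty list if it strays.
    try:
        return json.loads(response.choices[0].message.content)
    except (json.JSONDecodeError, TypeError):
        return []

# Example usage: audit a suspicious answer before trusting it.
# verdicts = flag_dubious_claims("Is mixing X and Y safe?", chatbot_answer)
# risky = [v for v in verdicts if v.get("verdict") != "well-supported"]
```

This only catches what the second model happens to recognize as dubious, so it's a screening step, not a guarantee.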