A Calif. Teen Trusted ChatGPT For Drug Advice. He Died From an Overdose.


In this piece for SFGATE, Lester Black and Stephen Council investigate how, over 18 months, 18-year-old Sam Nelson used ChatGPT to explore “how to take drugs, recover from them and plan further binges.” According to OpenAI’s own protocols, this shouldn’t have been possible. But it was—with tragic consequences. The article lays out just how easy it can be “to elicit problematic or dangerous information from the bot.”

ChatGPT belongs to a class of so-called “foundational” models, which try to answer almost any question sent their way, drawing on training data that can be untrustworthy. OpenAI has never fully disclosed what information trained its flagship product, but there is evidence that the company fed ChatGPT massive chunks of the internet, including a million hours of YouTube videos and years of Reddit threads. That means a random Reddit user’s post could inform ChatGPT’s next response.

“There is zero chance, zero chance, that the foundational models can ever be safe on this stuff,” Eleveld said. “I’m not talking about a 0.1% chance. I’m telling you it’s zero percent. Because what they sucked in there is everything on the internet. And everything on the internet is all sorts of completely false crap.”

More picks on AI

What Is Claude? Anthropic Doesn’t Know, Either

Gideon Lewis-Kraus | The New Yorker | February 9, 2026 | 10,268 words

“Researchers at the company are trying to understand their A.I. system’s mind—examining its neurons, running it through psychology experiments, and putting it on the therapy couch.”

Why Does A.I. Write Like … That?

Sam Kriss | The New York Times Magazine | December 3, 2025 | 4,592 words

“If only they were robotic! Instead, chatbots have developed a distinctive—and grating—voice.”

Kicking Robots

James Vincent | Harper’s Magazine | November 19, 2025 | 7,806 words

“Humanoids and the tech-industry hype machine.”