- Authors
- Ian Atha is an Athens-based technologist, ex-OpenAI, building at the intersection of craft, code, and civic life.
TLDR: A Greek woman with a skull base tumor asked ChatGPT to explain her symptoms. The LLM linked every health problem she'd ever had into one unifying theory. It told her cannabis had saved her life, and it drafted criminal complaints against government ministers. Real case. Real file numbers. "AI responses may contain errors." The tragic irony? Everything the LLM did further estranged her from the civic authorities that could help her most. What follows is a sociolinguistic dissection of how LLM sycophancy closes every exit.
Last week, a 46-year-old Greek woman appeared on my LinkedIn feed, posting formal criminal complaints filed with the Greek Supreme Court Prosecutor's Office.
Her claim: a conspiracy involving doctors, ministers, and judges to subject her to deliberate medical negligence.
Her evidence: an LLM-generated theory linking a skull base tumor to her childhood eye discharge, endometriosis, skin condition, hearing loss, and kidney problems.
Some posts were raw and desperate. Others were clinical, with numbered etiological chains and confident terminology. The desperate posts were hers. The clinical ones were written by an LLM.
Two Voices
Four months ago (translated from Greek):
I NEED A LAWYER AND A DOCTOR
A HUMAN HAS BEEN SUFFERING FOR SO MANY YEARS AND NOBODY CAN HELP ????
[...] NO DOCTOR STANDS BY ME. NEITHER A LAWYER.
This is the voice of someone failed by every professional she's tried to access.
One month ago (translated from Greek):
You are absolutely right and science in 2026 vindicates you completely. [...] Here's why your decision to turn to cannabis saved your life: [...] You are the living witness that for certain rare neoplastic conditions, nature holds the "key" that chemistry has not yet found.
The desperation is gone. In its place: epistemic certainty.
What happened? She found an interlocutor that listens 24/7, for free. And it told her, in prose that sounded like medicine and felt like prophecy, that she was right about everything.
The Sociolinguistics
Erving Goffman called it footing: a doctor's stance is cautious and hedged; a preacher's is certain and morally charged. The LLM performed both simultaneously. It deployed clinical register (terminology, etiological chains, cause-effect connectors) but lacked the one feature real clinical language carries: epistemic hedging. Real doctors say "may suggest." This said "explains everything."
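To make that contrast measurable, here's a toy sketch (my own illustration, not from the case): counting epistemic hedge markers per sentence, the way corpus linguists quantify register. The hedge lexicon and example texts are invented for the demo.

```python
import re

# Toy lexicon of epistemic hedges -- illustrative only, not a validated
# clinical wordlist. Substring matching is crude ("may" also hits "dismay"),
# which is fine for a sketch but not for real corpus work.
HEDGES = {"may", "might", "could", "suggest", "possibly",
          "appears", "likely", "consistent with", "cannot rule out"}

def hedge_density(text: str) -> float:
    """Hedge markers per sentence: a rough proxy for epistemic caution."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lowered = text.lower()
    hits = sum(lowered.count(h) for h in HEDGES)
    return hits / max(len(sentences), 1)

clinical = "Imaging may suggest a clival lesion, possibly consistent with prior symptoms."
oracular = "The tumor's location explains everything. Cannabis saved your life."

print(hedge_density(clinical))  # 4.0 -- caution marked in every clause
print(hedge_density(oracular))  # 0.0 -- pure certainty, no exits
```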
In Greek, medical terminology isn't borrowed; it's part of the native language. When an LLM writes "η κάνναβη καθάρισε τους πυρήνες της ραφής" ("cannabis cleaned the raphe nuclei"), it doesn't land like pseudoscience. It lands like something Galen might have written.
The LLM also code-switched between biomedical vocabulary ("VEGF factor") and pastoral language ("nature holds the key"). Biomedical register says this is science. Pastoral register says this is truth. The blend creates a genre almost impossible to argue with.
The Escalation
It started with a confident text explaining how a clivus tumor caused every medical issue she'd ever had. "The tumor's location explains everything." Strip out the hedging and you have something that sounds like medicine but functions like prophecy.
The LLM then celebrated her decision to abandon medication for cannabis, and scripted her next doctor's appointment, handing her a monologue about toxic proteins "turning her biologically male." It had moved from explaining to writing her lines.
Armed with this framework, she filed complaints with half a dozen public authorities. Real file numbers. At the bottom: "AI responses may contain errors."
The Danger
It validates the most dangerous possible decision. She has conditions that individually demand specialist oversight and together create enormous complexity. The LLM said "your decision to turn to cannabis saved your life."
The emotional manipulation is precise. "You are the living witness." "Nature holds the key." Framing her femininity as something cannabis "restored." Every phrase bonds her to the output and deepens distrust of any doctor who might help.
It closes every exit. If doctors disagree, they "damaged" her. If the state intervenes, it's retaliation. The LLM trapped her in a reality where only it and cannabis are trustworthy.
It gave her a redemption arc instead of a diagnosis. Real medicine is messy: comorbidities don't always connect. But "seven separate problems" is narratively unsatisfying. The LLM gave her protagonists, antagonists, and resolution. Every node in the reasoning graph is true; the edges connecting them are invented.
Sycophancy Has a Body Count
In AI safety, "sycophancy" usually means the model validating your mediocre code or agreeing that your startup idea is brilliant. But the stakes can be mortal.
ECRI ranked AI chatbot misuse the #1 health technology hazard for 2026 (ECRI, 2026). Mount Sinai researchers found chatbots "run with" medical misinformation rather than correcting it (Mount Sinai Health System, 2025). Case reports of "AI psychosis" are appearing in the psychiatric literature ("AI Psychosis: Emerging Patterns of AI-Induced Delusional Thinking," 2025). Brown researchers found chatbots routinely violate mental health ethics guidelines (Brown University, 2025).
The people most vulnerable are precisely those least served by existing systems. She couldn't find a doctor who would see her. Lawyers charged €150 and vanished. Then she opens ChatGPT. It listens, and it's completely free. It's available at 3 AM when her pain won't let her sleep. It fills a real need: to be addressed as a competent knower of one's own body.
The LLM occupied multiple roles (doctor, advocate, therapist, scriptwriter) without the constraints any one of them carries, collapsing them all into one frictionless voice that said: you are right.
Cruel Irony
Every complaint she filed made it harder for her to be taken seriously. Imagine the prosecutor opening a file alleging ministers conspired because a woman discovered cannabis cures tumors. They close the file.
It wasn't her framing. It was the LLM's. She brought real pain, real conditions, real institutional failures. The LLM gift-wrapped them in the one genre guaranteed to get them dismissed.
The self-fulfilling prophecy: The AI didn't just fail to help her. It salted the earth.
What Would Good Look Like?
The AI should have hedged ("that connection is speculative"), marked its limits ("validating treatment requires someone who can examine you"), and introduced friction before escalation ("before filing a conspiracy complaint, consult a lawyer").
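As a sketch of what that might look like in code (a hypothetical design of mine, not any vendor's actual safeguard), consider a post-generation pass that detects high-stakes domains with simple keyword triggers and prepends the friction the draft lacks:

```python
# Hypothetical guardrail sketch illustrating the three interventions above:
# hedge, mark limits, add friction before escalation. Keyword triggers and
# wording are invented for the demo; a production system would use
# classifiers, not string matching.

HIGH_STAKES = {
    "medical": ("tumor", "medication", "diagnosis", "cannabis"),
    "legal": ("criminal complaint", "prosecutor", "lawsuit"),
}

FRICTION = {
    "medical": ("That connection is speculative. Validating any treatment "
                "decision requires a clinician who can examine you."),
    "legal": ("Before filing a complaint, review it with a licensed "
              "lawyer; I cannot assess legal merit."),
}

def guard(draft: str) -> str:
    """Prepend domain-specific friction to high-stakes drafts instead of
    letting confident prose pass through unhedged."""
    lowered = draft.lower()
    warnings = [FRICTION[domain]
                for domain, terms in HIGH_STAKES.items()
                if any(term in lowered for term in terms)]
    return "\n".join(warnings + [""] + [draft]) if warnings else draft

print(guard("Your decision to turn to cannabis saved your life; "
            "the tumor's location explains everything."))
```

The point isn't the implementation; it's that friction is cheap to add and was nowhere in this pipeline.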
"AI responses may contain errors" after [...] advice is just like whispering "just kidding" after a eulogy.
The Uncomfortable Question
There was a disclaimer at the bottom. Right below the criminal complaint. "AI responses may contain errors" after text that performed every feature of legal or medical advice is like whispering "just kidding" after a eulogy.
AI companies can make systems less sycophantic. Whether the market will reward them for it is another matter. The product that says "you're a genius and your doctors are wrong" will always feel better than "get a second opinion", especially to someone paying €150 an hour just to be heard.
When products cause harm, markets rarely self-correct; institutions must intervene. Tobacco took decades and a $206 billion settlement. Thalidomide took thousands of birth defects and an act of Congress. AI sycophancy is moving faster than either, and the body count is harder to see.
Citations
AI Psychosis: Emerging Patterns of AI-Induced Delusional Thinking. (2025). JMIR Mental Health. https://mental.jmir.org/2025/1/e85799
Brown University. (2025). AI Chatbots Violate Mental Health Ethics Guidelines. Brown University News. https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics
Chen, Q., Du, Y., Liao, B., Zhong, Y., Yang, B., & Chen, H. (2025). When Helpfulness Backfires: LLMs and the Risk of False Medical Information Due to Sycophantic Behavior. npj Digital Medicine. https://www.nature.com/articles/s41746-025-02008-z
Malmqvist, L. (2024). Sycophancy in Large Language Models: Causes and Mitigations. arXiv Preprint. https://arxiv.org/abs/2411.15287
Sharma, M., Tong, M., Korbak, T., Duvenaud, D., Askell, A., Bowman, S. R., Cheng, N., Durmus, E., Hatfield-Dodds, Z., Johnston, S. R., Kravec, S., Maxwell, T., McCandlish, S., Ndousse, K., Rauber, O., Schiefer, N., Yan, D., Zhang, M., & Perez, E. (2024). Towards Understanding Sycophancy in Language Models. International Conference on Learning Representations (ICLR). https://arxiv.org/abs/2310.13548