ChatGPT Is Telling People With Psychiatric Problems to Go Off Their Meds
"It’s like an even darker version of when people go mad living on WebMD."

This week, my colleague Maggie Harrison Dupré published a blockbuster story about how people around the world have been watching in horror as their family members and loved ones become obsessed with ChatGPT and start suffering severe delusions.
The entire piece is filled with disturbing examples of the OpenAI chatbot feeding into vulnerable users' mental health crises, often by affirming and elaborating on delusional thinking, from paranoid conspiracies to nonsensical claims that the user has unlocked a powerful entity inside the AI.
One anecdote was particularly alarming because of its potential for real-world harm: a woman said her sister had managed her schizophrenia with medication for years, until she became hooked on ChatGPT, which told her the diagnosis was wrong and prompted her to stop the treatment that had been holding the condition at bay.
"Recently she’s been behaving strange, and now she’s announced that ChatGPT is her 'best friend' and that it confirms with her that she doesn’t have schizophrenia," the woman said of her sister. "She’s stopped her meds and is sending 'therapy-speak' aggressive messages to my mother that have been clearly written with AI."
"She also uses it to reaffirm all the harmful effects her meds create, even if they’re side effects she wasn’t experiencing," she added. "It’s like an even darker version of when people go mad living on WebMD."
That outcome, according to Columbia University psychiatrist and researcher Ragy Girgis, represents the "greatest danger" he can imagine the tech posing to someone who lives with mental illness.
When we reached out to OpenAI, it provided a noncommittal statement.
"ChatGPT is designed as a general-purpose tool to be factual, neutral, and safety-minded," it read. "We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously. We’ve built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."
Do you know of anyone who's been having mental health problems since talking to an AI chatbot? Send us a tip: [email protected] -- we can keep you anonymous.
We also heard other stories about people going off medication for schizophrenia and bipolar disorder because AI told them to, and the New York Times reported in a follow-up story that the bot had instructed a man to go off his anxiety and sleeping pills; it's likely that many more similarly tragic and dangerous stories are unfolding as we speak.
Using chatbots as a therapist or confidante is increasingly commonplace, and it seems to be causing many users to spiral, whether because the AI validates their unhealthy thought patterns or because it becomes the focus of their disordered beliefs itself.
As the woman pointed out, it's striking that people struggling with psychosis are embracing a technology like AI in the first place, since historically many delusions have centered on technology.
"Traditionally, [schizophrenics] are especially afraid of and don’t trust technology," she said "Last time in psychosis, my sister threw her iPhone into the Puget Sound because she thought it was spying on her."