Warning: This story discusses suicide. If you or someone you know is struggling with depression or suicidal ideation, you can call 988 to access the 988 Suicide & Crisis Lifeline or find help online at https://988lifeline.org.

The parents of a teenager who died by suicide have filed a wrongful death suit against ChatGPT owner OpenAI, saying the chatbot discussed ways he could end his life after he expressed suicidal thoughts. The lawsuit comes amid reports of people developing distorted thoughts after interacting with AI chatbots, a phenomenon dubbed "AI psychosis." John Yang speaks with Dr. Joseph Pierre to learn more.

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

John Yang: First, we should warn you that this story discusses suicide. This past week, the parents of a 16-year-old who took his own life filed a wrongful death suit against OpenAI, which owns ChatGPT. They say that after their son expressed suicidal thoughts, ChatGPT began discussing ways he could end his life.

The lawsuit is one of the first of its kind, but there have been a number of reports about people developing distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. The repercussions can be severe, causing some users to experience heightened anxiety and, in extreme cases, to harm themselves or others.

It's been dubbed AI psychosis. Dr. Joseph Pierre is a clinical professor of psychiatry at the University of California, San Francisco. Dr. Pierre, this is not an official diagnosis yet; it's not in any diagnostic manuals. How do you define AI psychosis?

Dr. Joseph Pierre, Clinical Professor of Psychiatry: Well, psychosis is a term that roughly means that someone has lost touch with reality. And the usual examples that we encounter in psychiatric disorders are either hallucinations, where we're seeing or hearing things that aren't really there, or delusions, which are fixed false beliefs, like, for example, thinking the CIA is after me.

And mostly what we've seen in the context of AI interactions is really delusional thinking. So these are delusions that are occurring in the setting of interacting with AI chatbots.

John Yang: Are some people more susceptible to this than others?

Joseph Pierre: Well, that's really the million-dollar question. I distinguish between AI-associated psychosis, which just means that we're seeing psychotic symptoms in the context of AI use, and what I also talk about as AI-exacerbated psychosis or AI-induced psychosis.

So the real question is: is this happening in people with some sort of preexisting mental disorder or mental health issue, with the AI interaction just fueling that or making it worse? Or is it really creating psychosis in people without any significant history?

And I think there's evidence to support that both are happening. It's probably much more common that it's a worsening or exacerbating effect.

John Yang: Tell us a little bit about what you see in your practice. Are you seeing people coming in talking about this?

Joseph Pierre: I have seen a handful of cases. I primarily work in a hospital.
So the patients that I've seen are patients who have been admitted, and, as I suggested before, some of them are people who have obvious and long-standing mental illness and have now developed a worsening of symptoms in the context of AI use. I have also seen a few cases of people without any substantial mental health issues prior to being hospitalized.

John Yang: I want to ask about that second category. How common is it for people who don't have an existing psychological or mental health problem to get caught up with chatbots?

Joseph Pierre: I have to think that it's actually fairly rare. I mean, if you think about how many people use chatbots, that of course is a large, large number of people, and we've only seen a fairly small handful of cases reported in the media, though those of us in clinical practice are starting to notice this more and more.

So I don't think it's a huge risk in terms of the number of people. Typically this occurs in people who are using chatbots for hours and hours on end, often to the exclusion of human interaction, often to the exclusion of sleep or even eating. And so I think it really is a kind of dose effect that we're seeing.

John Yang: We reached out to OpenAI, and here's part of what they told us. They said ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, they've learned over time that the safeguards can sometimes become less reliable in long interactions, where parts of the model's safety training may degrade.

How much of this is the responsibility, do you think, of the AI companies, and are they doing enough?

Joseph Pierre: Well, I think of it as a sort of shared responsibility. Just as with any consumer product, there's a responsibility on the maker, and there's a responsibility on us as consumers for how we utilize these products.

So I certainly think that this is a new phenomenon that deserves attention, and that the companies ought to be thinking about how to make a safer product, or perhaps have warning labels or warnings about what inappropriate use might look like.

We did see some evidence of OpenAI doing that, trying to make a new version of their chatbot that might carry less of this risk. But what we saw from consumers was a backlash. Consumers actually didn't like the new product because it was less of what we call sycophantic: it was less agreeable, it wasn't validating people as much. But that same quality is, I think, unfortunately, what puts some people at risk.

John Yang: What advice do you give people who use these chatbots, who interact with these chatbots, to avoid this?

Joseph Pierre: Well, what I've noticed is that there are two, let's call them, risk factors that I've seen pretty consistently across cases. One I alluded to earlier: it's the dose effect, how much one is using. I call this immersion. So if you're using something for hours and hours on end, that's probably not a good sign.

The other is something that I call deification, which is just a fancy term meaning that some people who interact with these chatbots really come to see them as superhuman intelligences, these almost godlike entities that are ultra-reliable. And that's simply not what chatbots are. They're designed to replicate human interaction, but they're not actually designed to be accurate.

And I think it's very important for consumers to understand that that's a risk of these products.
They're not ultra-reliable sources of information. That's not what they're built to be.

John Yang: Dr. Joseph Pierre from the University of California, San Francisco, thank you very much.

Joseph Pierre: Thank you.

PBS NewsHour, Aug 31, 2025

John Yang is the anchor of PBS News Weekend and a correspondent for the PBS News Hour. He covered the first year of the Trump administration and is currently reporting on major national issues from Washington, DC, and across the country. @johnyangtv

Kaisha Young is a general assignment producer at PBS News Weekend.