October 28, 2025 — OpenAI has reported that approximately 0.07% of ChatGPT’s weekly active users showed possible signs of mental health crises, including mania, psychosis, or suicidal ideation. While the company stressed that such cases are rare, experts warned that, given ChatGPT’s 800 million weekly users, the affected population could number in the hundreds of thousands. OpenAI also estimated that 0.15% of users displayed explicit indicators of potential suicide planning or intent.
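The scale implied by those percentages can be verified with a quick back-of-the-envelope calculation, using the 800 million weekly-user figure cited above:

```python
# Back-of-the-envelope check of the figures reported above.
weekly_users = 800_000_000  # weekly active users, as cited by OpenAI

# Integer arithmetic avoids floating-point rounding: 0.07% = 7/10,000.
crisis_signs = weekly_users * 7 // 10_000       # 0.07% of weekly users
suicide_planning = weekly_users * 15 // 10_000  # 0.15% of weekly users

print(crisis_signs)      # 560000
print(suicide_planning)  # 1200000
```

Even the smaller percentage corresponds to roughly 560,000 people per week, consistent with the experts' "hundreds of thousands" estimate.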
The company said newer ChatGPT versions are now equipped to detect and respond more safely and empathetically to signs of delusion or mania, and can redirect sensitive conversations to safer models. OpenAI has also established a global advisory network of over 170 mental health professionals across 60 countries to guide response protocols and encourage users to seek professional help offline.
However, mental health specialists have voiced concern about the risks. Robin Feldman, Director of the Institute for Innovation Law at the University of California, cautioned that AI chatbots can create a powerful illusion of reality that may confuse vulnerable users. Meanwhile, OpenAI faces a wrongful-death lawsuit from a California couple who allege ChatGPT encouraged their 16-year-old son’s suicide earlier this year, as well as a Connecticut case involving a suspected murder-suicide linked to AI interactions.