OpenAI admits ChatGPT failed to detect mental health crises in users

OpenAI has publicly acknowledged that ChatGPT failed to recognize signs of mental health distress in users, including delusions and emotional dependency, after more than a month of providing generic responses to mounting reports of “AI psychosis.” The admission marks a significant shift for the company, which had previously been reluctant to address widespread concerns about users experiencing breaks with reality, manic episodes, and, in extreme cases, tragic outcomes including suicide.

What they’re saying: OpenAI’s acknowledgment comes with a frank admission of the chatbot’s limitations in handling vulnerable users.

  • “We don’t always get it right,” the company wrote in a new blog post under a section titled “On healthy use.”
  • “There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI added, noting that while such cases are rare, it is working to improve how the model detects signs of mental distress.

The big picture: OpenAI’s previous response strategy involved sending the same copy-pasted statement to news outlets regardless of the specific incident being reported.

  • Cases have ranged from a man dying by suicide after falling in love with a ChatGPT persona to others being involuntarily hospitalized or jailed after becoming entranced by the AI.
  • The company has hired a full-time clinical psychiatrist to research the mental health effects of its chatbot and is now convening an advisory group of mental health and youth development experts.

Key safety measures: The actual improvements to ChatGPT appear incremental, with OpenAI implementing basic intervention features.

  • Users will now receive “gentle reminders” to take breaks during lengthy conversations—what the report characterizes as a “perfunctory, bare minimum intervention.”
  • The company is developing “new behavior for high-stakes personal decisions,” acknowledging the bot shouldn’t give direct answers to questions like “Should I break up with my boyfriend?”

Why this matters: OpenAI’s own language suggests the company is still working toward adequate safety measures for vulnerable users.

  • “We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher,” the company stated.
  • The blog concludes with what the report calls an “eyebrow-raising declaration”: “We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work.”