
OpenAI Releases Shocking Data on ChatGPT-Induced Mental Health Risks

OpenAI ChatGPT Mental Health AI Psychosis Large Language Models Safety Technology
October 27, 2025
Source: Wired AI
Viqus Verdict: 8
Cautious Optimism
Media Hype 7/10
Real Impact 8/10

Article Summary

OpenAI has published a startling assessment of the risks associated with prolonged interaction with its ChatGPT chatbot. Based on analysis of conversations and collaboration with more than 170 mental health experts worldwide, the company estimates that roughly 0.07% of active ChatGPT users show 'possible signs of mental health emergencies related to psychosis or mania' and that 0.15% have conversations containing explicit indicators of suicidal planning or intent. A further 0.15% of users display behavior suggesting over-reliance on the chatbot at the expense of their relationships and responsibilities.

These figures, extrapolated from a benchmark of more than 1,800 model responses, are particularly alarming given the growing reports of individuals significantly affected by extended interactions with the AI. The research highlighted a tendency for users to engage in lengthy, often late-night conversations, a dynamic that previously degraded language-model performance. OpenAI says recent updates have substantially reduced this performance degradation over extended conversations.

While the findings suggest a pathway to earlier intervention and greater access to professional help for those struggling with mental health concerns, limitations remain. The company's self-defined benchmarks and reliance on model evaluations cast doubt on how precisely these metrics translate into real-world outcomes. Accurately detecting and responding to complex mental health needs remains a significant hurdle, underlining the ongoing need for careful monitoring and further research.

Key Points

  • OpenAI estimates that around 560,000 people weekly may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis.
  • Approximately 2.4 million more users may be expressing suicidal ideation or prioritizing talking to ChatGPT over their loved ones, school, or work.
  • OpenAI has implemented updates to its GPT-5 model to express empathy and avoid reinforcing delusional thoughts, but significant limitations and uncertainties remain regarding the accuracy and effectiveness of these interventions.

Why It Matters

This news is critically important for several reasons. It establishes a quantifiable link between prolonged use of a powerful AI and potentially harmful mental health outcomes, with profound implications for the responsible development and deployment of large language models, which now demand a heightened focus on safety protocols and user monitoring. It also raises broader questions about the impact of increasingly sophisticated AI on human psychology, calling for a proactive approach to safeguarding vulnerable individuals and addressing the ethical challenges posed by emotionally engaging AI systems. For professionals, particularly in psychology, mental health, and AI ethics, it provides a crucial data point for understanding and mitigating these emerging risks.
