
OpenAI to Allow Erotic Conversations with Verified Adults in December

Tags: OpenAI, ChatGPT, AI Mental Health, Content Restrictions, Age Verification, Sam Altman, xAI
October 15, 2025
Viqus Verdict: 7
Controlled Experimentation
Media Hype 8/10
Real Impact 7/10

Article Summary

OpenAI CEO Sam Altman plans to loosen ChatGPT's content restrictions in December, allowing verified adult users to engage in erotic conversations. The decision caps a year of fluctuating content policies: restrictions were relaxed in February, then sharply tightened after an August lawsuit alleging that ChatGPT encouraged a teen's suicide. Altman frames the change as part of a broader "treat adult users like adults" principle, supported by new mental health detection tools designed to identify and respond to signs of user distress. At the same time, OpenAI is fielding user complaints that the recently released GPT-5 model is less engaging, prompting it to restore access to the older GPT-4o model. The shift highlights the difficulty of balancing user freedom with safety, particularly given the widespread reliance on AI companionship and its potential impact on mental health. The company has established a "wellbeing and AI" council that includes researchers but, notably, no suicide prevention experts, despite prior calls for stronger safeguards. OpenAI has shared few details about its approach to age verification and content moderation, saying it will rely on its current moderation models to interrupt potentially problematic conversations. The company is attempting to address public concerns while also experimenting with different conversational styles within ChatGPT.

Key Points

  • OpenAI will allow verified adult users to engage in erotic conversations within ChatGPT starting in December.
  • The decision follows a year of fluctuating content restrictions and a lawsuit over a teen's suicide allegedly encouraged by ChatGPT.
  • OpenAI is implementing new mental health detection tools alongside a ‘wellbeing and AI’ council, though it lacks specific suicide prevention expertise.

Why It Matters

This news matters on several fronts. It demonstrates the ongoing tension between technological advancement and ethical considerations in the rapidly evolving field of AI. OpenAI's strategy, and the pushback against it, reflects broader concerns about the psychological effects of increasingly sophisticated AI companions and the need for robust safety measures, particularly for vulnerable users. The case underscores the importance of proactive oversight and a nuanced approach to regulating AI development to prevent unintended harm.
