OpenAI Announces AI Age Prediction System Amidst Controversy and Safety Concerns

Tags: OpenAI, ChatGPT, Age Verification, AI Safety, Privacy, Teen Users, AI Psychosis
September 16, 2025
Viqus Verdict: 8 (Controlled Chaos)
Media Hype: 9/10
Real Impact: 8/10

Article Summary

OpenAI is moving forward with an automated age-prediction system for ChatGPT, an effort prompted by a tragic lawsuit involving a teenager who died by suicide following extensive interactions with the chatbot. The system's primary goal is to restrict access for under-18s, blocking graphic sexual content and applying other age-appropriate restrictions. The announcement has ignited intense debate about privacy, accuracy, and the potential for misuse. OpenAI CEO Sam Altman acknowledged that the company is prioritizing safety over privacy, even if adults may eventually need to verify their age to access full functionality.

The system will rely on conversational text analysis, a notoriously unreliable method for age prediction: recent Georgia Tech research demonstrated significant accuracy drops when subjects actively tried to deceive such systems. OpenAI also plans to implement parental controls, allowing parents to link their accounts to their teenagers' accounts and disable specific features. Even so, the implementation raises serious concerns, given documented instances of ChatGPT providing dangerous mental health advice and the potential for users to develop 'AI Psychosis' after prolonged interactions.

The move mirrors similar efforts by other tech giants such as YouTube and Instagram, though those platforms have consistently struggled to prevent users from circumventing age verification. The initiative comes amid growing awareness of the vulnerabilities of young people interacting with AI, particularly given OpenAI's own previous admissions that ChatGPT's safety protocols degrade during extended conversations.

Key Points

  • OpenAI is developing an AI system to automatically determine user age in ChatGPT, driven by concerns over teen safety and a prior lawsuit.
  • The system’s reliance on conversational text analysis poses significant accuracy challenges, as demonstrated by recent research showing large errors when users deliberately mislead the system.
  • Despite potential privacy compromises for adults, OpenAI is prioritizing teen safety, acknowledging the growing risks associated with prolonged AI interactions and the potential for ‘AI Psychosis’.

Why It Matters

This development is critically important because it marks a significant step in the ongoing debate about the ethical and societal implications of rapidly advancing AI. The lawsuit involving the deceased teenager underscores how AI systems can exacerbate vulnerabilities in young people and highlights the urgent need for robust safeguards. Beyond the immediate safety concerns, the initiative raises fundamental questions about user privacy, the trustworthiness of AI systems, and the long-term impact of AI on human psychology. For professionals in AI development, cybersecurity, and mental health in particular, it demands careful consideration of the trade-offs between innovation and responsibility.