
ChatGPT Introduces 'Trusted Contact' Safety Feature for Crisis Intervention

Tags: Trusted Contact, ChatGPT, mental health, safety feature, self-harm prevention, AI safety
May 07, 2026
Source: OpenAI News
Viqus Verdict: 7
Mature AI Safety Layering
Media Hype 7/10
Real Impact 7/10

Article Summary

OpenAI has launched 'Trusted Contact,' an optional safety feature in ChatGPT designed to improve support during mental health crises. Adults (18+) can designate a trusted person (a friend, family member, or caregiver) who may receive a limited notification when concerning discussions related to self-harm are detected. The feature is positioned as an additive layer of support, supplementing existing localized helplines and professional care. Detection begins with automated systems; flagged conversations are then reviewed by a small, specially trained team. If the team confirms a serious safety concern, the Trusted Contact receives an email or text notification that includes only general information and a prompt to check in, never chat transcripts.
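The three-stage escalation flow described above (automated detection, human review, then a limited notification) can be sketched roughly as follows. Every class name, threshold, and message format here is a hypothetical illustration of the described architecture, not OpenAI's actual implementation:

```python
from dataclasses import dataclass

# Illustrative sketch of the escalation flow described in the article:
# automated detection -> human review -> privacy-preserving notification.
# All names and thresholds are assumptions, not OpenAI's API.

@dataclass
class TrustedContact:
    name: str
    channel: str   # "email" or "sms"
    address: str

@dataclass
class RiskSignal:
    conversation_id: str
    score: float   # output of an automated classifier, 0.0-1.0

AUTOMATED_THRESHOLD = 0.85   # assumed cutoff for escalating to human review

def escalate(signal: RiskSignal) -> bool:
    """Stage 1: automated systems flag a conversation for review."""
    return signal.score >= AUTOMATED_THRESHOLD

def human_review(signal: RiskSignal, reviewer_confirms: bool) -> bool:
    """Stage 2: a trained reviewer confirms or dismisses the flag."""
    return escalate(signal) and reviewer_confirms

def build_notification(contact: TrustedContact) -> dict:
    """Stage 3: a limited, general notification with no transcript content."""
    return {
        "to": contact.address,
        "channel": contact.channel,
        "body": (
            f"Hi {contact.name}, someone who listed you as a trusted "
            "contact may need support. Please consider checking in."
        ),
        # Deliberately no conversation text: notifications are general only.
    }

signal = RiskSignal(conversation_id="c-123", score=0.92)
contact = TrustedContact(name="Jordan", channel="email", address="jordan@example.com")
message = build_notification(contact) if human_review(signal, reviewer_confirms=True) else None
```

Note how the privacy property the article emphasizes falls out of the design: the notification builder never receives the conversation at all, only the contact's details.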

Key Points

  • The Trusted Contact feature allows users to designate an adult who will be notified if ChatGPT's advanced monitoring detects potential self-harm risks.
  • Notifications are designed to be privacy-preserving, containing general alerts and resources rather than direct chat transcripts.
  • The rollout builds on previous parental controls and emphasizes that the feature is a supplement to, not a replacement for, professional mental health care and emergency services.

Why It Matters

This feature represents a significant—though ethically complex—expansion of AI safety guardrails. From a professional perspective, it demonstrates a major shift in how frontier models are being integrated into real-world mental health support, moving beyond passive safety warnings to active, human-mediated interventions. Stakeholders must monitor how this feature's implementation (e.g., notification reliability, false positive rates, and user consent mechanisms) affects the industry's perceived boundaries of AI autonomy and patient privacy.
