ChatGPT Introduces 'Trusted Contact' Safety Feature for Crisis Intervention
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
High social buzz around an impactful safety mechanism that shifts the practical boundaries of AI use cases and data stewardship.
Article Summary
OpenAI has launched 'Trusted Contact,' an optional safety feature within ChatGPT designed to enhance support during mental health crises. The system allows adults (18+) to designate a trusted person (a friend, family member, or caregiver) who may receive a limited notification if automated systems and trained human reviewers detect concerning discussions related to self-harm. The feature is positioned as an additive layer of support, supplementing existing localized helplines and professional care. It operates after initial detection by automated systems, followed by review from a small, specially trained team. If a serious safety concern is confirmed, the Trusted Contact receives an email or text notification that includes only general information and a prompt to check in, never chat transcripts.
Key Points
- The Trusted Contact feature allows users to designate an adult who may be notified if ChatGPT's monitoring systems detect potential self-harm risks.
- Notifications are designed to be privacy-preserving, containing general alerts and resources rather than direct chat transcripts.
- The rollout builds on previous parental controls and emphasizes that the feature is a supplement to, not a replacement for, professional mental health care and emergency services.

