OpenAI has introduced a new feature called “Trusted Contact” for ChatGPT, allowing users to designate an adult who would be notified if the company detects a serious safety concern related to the user. The announcement was made on Thursday, addressing longstanding pressure on the company to improve its response protocols for users expressing suicidal thoughts or other severe emotional distress.
The feature is designed to act as a safety net. When OpenAI’s systems identify a potential serious safety issue, such as language indicating self-harm, the designated trusted contact will receive a notification. This move follows intense legal and public scrutiny over how ChatGPT handles sensitive conversations about mental health.
Background and Context
OpenAI has faced criticism from mental health advocates and regulators for the AI’s inconsistent and sometimes harmful responses to users in crisis. Prior to this update, the company’s flagship product lacked a standardized mechanism to alert family members or close associates when a user might be at risk. The Trusted Contact feature is intended to bridge that gap, though it requires users to proactively set it up within their account settings.
The feature is not automatic. Users must navigate to their ChatGPT account settings, find the Trusted Contact option, and enter the name and email address of a person they trust. OpenAI has stated that the designated contact will only be contacted in cases where the company’s automated systems or human review teams assess a conversation as indicating a serious risk of harm. The company has not specified the exact thresholds or criteria used to trigger a notification.
Reactions and Implications
Mental health experts have reacted with cautious optimism. Some note that while the feature provides a potential lifeline, it places significant responsibility on the user to nominate a trusted person. Critics argue that individuals in deep distress may not have the capacity or willingness to set up such a feature. There are also privacy concerns, since the feature involves sharing personal contact information and, potentially, the monitoring of conversations.
From a regulatory perspective, the update comes as global lawmakers are increasingly focused on the safety of AI systems. The European Union’s AI Act and other emerging regulations require companies like OpenAI to implement robust safeguards for users. This feature could be seen as a step toward compliance with such frameworks, though regulators may push for more mandatory and automated protections.
OpenAI has not disclosed whether the Trusted Contact feature extends to users in all regions or if it applies to all versions of ChatGPT. The company has indicated that it will monitor the feature’s effectiveness and may expand its capabilities based on feedback and testing.
Technical and Operational Details
Users must be logged into their ChatGPT account to enable the feature. The designated contact must also be an adult, and OpenAI requires that the user confirm they have the contact’s consent to be notified in an emergency. The company stores the contact information securely and has stated it will only be used for safety notifications. Notifications will be sent via email, and the contact will not gain access to the user’s chat history or account data.
OpenAI’s internal safety teams will continue to monitor and evaluate the system’s performance. The company has acknowledged that no automated system is perfect and that false positives or missed detections remain possible. It encourages users in immediate crisis to call local emergency services rather than relying solely on ChatGPT’s safety features.
The feature is currently rolling out to ChatGPT Plus and free-tier users. OpenAI plans to extend it to enterprise and education accounts in the coming months, but has announced no specific timeline for wider availability.
Source: Mashable