{"id":6945,"date":"2026-05-08T07:48:07","date_gmt":"2026-05-08T07:48:07","guid":{"rendered":"https:\/\/delimiter.online\/blog\/chatgpt-trusted-contact\/"},"modified":"2026-05-08T07:48:07","modified_gmt":"2026-05-08T07:48:07","slug":"chatgpt-trusted-contact","status":"publish","type":"post","link":"https:\/\/delimiter.online\/blog\/chatgpt-trusted-contact\/","title":{"rendered":"OpenAI Adds Trusted Contact Feature for User Safety in ChatGPT"},"content":{"rendered":"<p><a href=\"https:\/\/delimiter.online\/blog\/ai-generated-podcasts\/\" title=\"OpenAI\">OpenAI<\/a> has introduced a new feature called \u201cTrusted Contact\u201d for <a href=\"https:\/\/delimiter.online\/blog\/agi-risks-and-governance\/\" title=\"ChatGPT\">ChatGPT<\/a>, allowing users to designate an adult who would be notified if the company detects a serious safety concern related to the user. The announcement was made on Thursday, addressing longstanding pressure on the company to improve its response protocols for users expressing suicidal thoughts or other severe emotional distress.<\/p>\n<p>The feature is designed to act as a safety net. When OpenAI\u2019s systems identify a potential serious safety issue, such as language indicating self-harm, the designated trusted contact will receive a notification. This move follows intense legal and public scrutiny over how ChatGPT handles sensitive conversations about <a href=\"https:\/\/delimiter.online\/blog\/airpods-with-cameras\/\" title=\"mental health\">mental health<\/a>.<\/p>\n<h2>Background and Context<\/h2>\n<p>OpenAI has faced criticism from mental health advocates and regulators for the AI\u2019s inconsistent and sometimes harmful responses to users in crisis. Prior to this update, the company\u2019s flagship product lacked a standardized mechanism to alert family members or close associates when a user might be at risk. 
The Trusted Contact feature is intended to bridge that gap, though it requires users to proactively set it up within their account settings.<\/p>\n<p>The feature is not automatic. Users must navigate to their ChatGPT account settings, find the Trusted Contact option, and enter the name and email address of a person they trust. OpenAI has stated that the designated contact will only be notified in cases where the company\u2019s automated systems or human review teams assess a conversation as indicating a serious risk of harm. The company has not specified the exact thresholds or criteria used to trigger a notification.<\/p>\n<h2>Reactions and Implications<\/h2>\n<p>Mental health experts have reacted with cautious optimism. Some note that while the feature provides a potential lifeline, it places significant responsibility on the user to nominate a trusted person. Critics argue that individuals in deep distress may not have the capacity or willingness to set up such a feature. There are also privacy concerns, as the feature involves sharing personal contact information and the potential monitoring of conversations.<\/p>\n<p>From a regulatory perspective, the update comes as global lawmakers are increasingly focused on the safety of AI systems. The European Union\u2019s AI Act and other emerging regulations require companies like OpenAI to implement robust safeguards for users. This feature could be seen as a step toward compliance with such frameworks, though regulators may push for more mandatory and automated protections.<\/p>\n<p>OpenAI has not disclosed whether the Trusted Contact feature extends to users in all regions or whether it applies to all versions of ChatGPT. The company has indicated that it will monitor the feature\u2019s effectiveness and may expand its capabilities based on feedback and testing.<\/p>\n<h2>Technical and Operational Details<\/h2>\n<p>Users must be logged into their ChatGPT account to enable the feature. 
The designated contact must also be an adult, and OpenAI requires that the user confirm they have the contact\u2019s consent to be notified in an emergency. The company stores the contact information securely and has stated it will be used only for safety notifications. Notifications will be sent via email, and the contact will not gain access to the user\u2019s chat history or account data.<\/p>\n<p>OpenAI\u2019s internal safety teams will continue to monitor and evaluate the system\u2019s performance. The company has acknowledged that no automated system is perfect and that false positives or missed detections remain possible. It encourages users in immediate crisis to call local emergency services rather than relying solely on ChatGPT\u2019s safety features.<\/p>\n<p>The feature is currently rolling out to ChatGPT Plus and free-tier users. OpenAI plans to extend it to enterprise and education accounts in the coming months. No specific timeline for wider availability has been announced.<\/p>\n<p>Source: Mashable<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has introduced a new feature called \u201cTrusted Contact\u201d for ChatGPT, allowing users to designate an adult who would be notified if the company detects a serious safety concern related to the user. 
The announcement was made on Thursday, addressing longstanding pressure on the company to improve its response protocols for users expressing suicidal thoughts [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":6946,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[387],"tags":[493,928,228,8135,1234,3625,553,265,1354,8144],"class_list":["post-6945","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-news","tag-chatgpt","tag-ai-safety","tag-artificial-intelligence","tag-health-fitness","tag-life","tag-mental-health","tag-news","tag-openai","tag-social-good","tag-trusted-contact"],"_links":{"self":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/posts\/6945","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/comments?post=6945"}],"version-history":[{"count":0,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/posts\/6945\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/media\/6946"}],"wp:attachment":[{"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/media?parent=6945"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/categories?post=6945"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/delimiter.online\/blog\/wp-json\/wp\/v2\/tags?post=6945"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}