Tuesday, May 12, 2026
Tech · Health

ChatGPT's New Safety Feature Alerts Your Trusted Contacts in Mental Health Crises

ChatGPT now lets you add a trusted contact for mental health emergencies. Learn how the feature works, what privacy protections apply, and how it complies with GDPR.

OpenAI has introduced a new safety mechanism for ChatGPT that lets users worldwide, including in Portugal, designate a Trusted Contact: a person who will be alerted if the AI detects serious signs of self-harm in conversations. The feature, rolled out in May 2026, is one of the first direct attempts by a major AI platform to connect automated mental health monitoring with real-world intervention networks.

Why This Matters

The Trusted Contact feature offers several key protections for users:

Mental health safety net: If you discuss self-harm in ways flagged as concerning, a trusted person receives a notification to check in on you

Privacy-protected alerts: The system does not share chat transcripts or specific phrases — only a general alert suggesting a wellness check

Human review involved: A trained review team evaluates automated flags before notifications are sent, reducing false alarms

Age restriction: Both user and contact must be 18+ (19+ in South Korea)

How the Trusted Contact System Works

Setting up the feature requires deliberate user action. Adults using ChatGPT can nominate another adult — a relative, colleague, or friend — through account settings. The nominated person receives an invitation and has one week to accept. If they decline, the feature remains inactive.

Once activated, OpenAI's monitoring systems and human reviewers scan conversations for indicators of serious self-harm risk. The company emphasizes this is not blanket surveillance: the trigger threshold is deliberately high, focused on discussions suggesting imminent danger rather than general distress.

When a potential crisis is identified:

ChatGPT informs the user first that their designated contact may be notified

The system offers conversation starters encouraging direct outreach

Only if risk assessment confirms concern does the platform send a minimal alert to the Trusted Contact

No chat transcripts or phrases are shared — only a notification to check in
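The escalation steps above can be sketched as a simple decision flow. This is an illustrative sketch only: the function names, the severity threshold, and the data structure are hypothetical and are not drawn from OpenAI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CrisisFlag:
    """Hypothetical record produced by automated monitoring."""
    user_id: str
    severity: float        # 0.0 (no concern) to 1.0 (imminent danger)
    human_confirmed: bool  # set by the trained review team

NOTIFY_THRESHOLD = 0.9     # deliberately high: imminent danger, not general distress

def handle_flag(flag: CrisisFlag, has_trusted_contact: bool) -> list[str]:
    """Return the sequence of actions the platform would take for a flag."""
    actions: list[str] = []
    if flag.severity < NOTIFY_THRESHOLD:
        return actions                          # below threshold: no escalation
    # 1. The user is always informed first that their contact may be notified.
    actions.append("inform_user")
    # 2. The user is offered conversation starters encouraging direct outreach.
    actions.append("offer_conversation_starters")
    # 3. Only a human-confirmed flag triggers the alert, and only if a
    #    Trusted Contact was nominated and accepted the invitation.
    if flag.human_confirmed and has_trusted_contact:
        actions.append("send_minimal_alert")    # no transcripts, no phrases
    return actions
```

Note how the sketch mirrors the two gates described above: general distress never escalates, and even a high-severity automated flag sends nothing until a human reviewer confirms it.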

What This Means for Residents in Portugal

For ChatGPT users in Portugal, the Trusted Contact feature introduces a new safety layer. The country has seen growing engagement with AI chatbots, particularly among younger users and remote workers. Portugal's National Health Service (SNS) operates crisis hotlines, but many people turn to digital tools like ChatGPT before contacting formal channels.

The system is entirely optional — users must actively enable it. Additionally, individuals with multiple ChatGPT accounts could potentially evade detection by switching profiles.

The feature does not replace emergency services. OpenAI directs users toward established crisis resources, including:

SOS Voz Amiga (21 354 45 45) — Portugal

Telefone da Amizade (22 832 35 35) — Portugal

988 Suicide & Crisis Lifeline — United States

Samaritans — United Kingdom

Privacy and Data Protection Considerations

The Trusted Contact feature handles sensitive health data regulated under the European Union's General Data Protection Regulation (GDPR). OpenAI has established OpenAI Ireland Limited as the data controller for users in the European Economic Area (EEA) and Switzerland.

OpenAI emphasizes its privacy safeguards, though specific details about data retention periods and third-party audit processes remain limited. Users should review OpenAI's privacy documentation before enabling the feature.

Broader Context: AI Platforms and Safety Features

Other platforms have deployed similar systems. Facebook and Instagram use automated detection for suicidal content, and smartphones include emergency contact functionality. ChatGPT's implementation is distinctive because it operates through conversational AI rather than social networks.

OpenAI employs a Moderation API that flags self-harm discussions and other concerning content. The company continues to refine detection mechanisms with input from mental health professionals.
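OpenAI's publicly documented Moderation API returns per-category flags that include self-harm signals, and a downstream system could use those flags to decide when human review is warranted. The sketch below parses a response in that documented JSON shape; the sample payload values are invented for illustration, and nothing here should be read as how the Trusted Contact pipeline is actually built.

```python
import json

# Sample payload in the shape documented for OpenAI's Moderation API
# (the values below are fabricated for illustration).
sample_response = json.loads("""
{
  "results": [{
    "flagged": true,
    "categories": {
      "self-harm": true,
      "self-harm/intent": true,
      "self-harm/instructions": false,
      "violence": false
    }
  }]
}
""")

SELF_HARM_CATEGORIES = ("self-harm", "self-harm/intent", "self-harm/instructions")

def needs_human_review(response: dict) -> bool:
    """True when the classifier flagged any self-harm category."""
    result = response["results"][0]
    return result["flagged"] and any(
        result["categories"].get(cat, False) for cat in SELF_HARM_CATEGORIES
    )
```

In a pipeline like the one the article describes, a `True` result would queue the conversation for the trained review team rather than trigger any notification directly.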

Parental Controls for Younger Users

Beyond the Trusted Contact feature, ChatGPT includes separate parental controls with safety alerts for suspected minor users at risk. These protections reflect OpenAI's recognition that younger users may need additional oversight.

What Comes Next

OpenAI has stated that the Trusted Contact feature is part of a broader effort to improve AI responsiveness during crises. The company says it will continue collaborating with clinicians and policymakers to refine detection algorithms.

For users in Portugal and elsewhere, the feature is now live and optional. Whether to enable it depends on individual comfort with the trade-off between conversational privacy and access to a potential safety net.

Inês Cardoso

Culture & Lifestyle Reporter

Explores Portugal through its food, festivals, and traditions. Passionate about uncovering the stories behind the places tourists visit and the communities that keep them alive.