
Calvin Wankhede / Android Authority
TL;DR
- ChatGPT will receive a new “Trusted Contact” feature.
- Allows adults to add a trusted contact that ChatGPT can alert for help in times of crisis.
- OpenAI also uses trained human reviewers to decide whether a conversation indicates a safety concern.
ChatGPT is one of the most popular AI chatbots, and users often turn to it for conversations about almost anything, including self-harm. ChatGPT has been implicated in several self-harm cases, and OpenAI has even been sued over such incidents. Now OpenAI is introducing a new “Trusted Contact” feature to help users in crisis get access to real-world support.
Adult ChatGPT users can add a trusted contact, who must also be an adult (18+ in most regions, or 19+ in South Korea), in their ChatGPT settings. The added contact receives an invitation and can choose to accept or decline it.

Once the feature is set up, the user doesn’t need to do anything else. If ChatGPT detects that a conversation may involve self-harm, it will inform the user that their trusted contact can be notified.
However, OpenAI does not rely entirely on automated systems here. A team of specially trained human reviewers will also review the conversation. This team is responsible for deciding whether the conversation indicates a safety concern and for making the final call on whether the trusted contact should be notified.
The feature is optional, and OpenAI says that even when it is used, the trusted contact will not receive a transcript of the user’s chats, in order to protect privacy. The contact will simply receive a notification encouraging them to reach out to the user.

This new feature adds to ChatGPT’s existing safeguards for sensitive conversations, which include encouraging people to contact helplines and even take a break from using ChatGPT.