ChatGPT can now alert someone you trust if things get serious. It is a simple feature, but it might be one of the most human ...
OpenAI has introduced a new safety feature in ChatGPT that lets users add "trusted contacts" to their profiles. These ...
When OpenAI's automated monitoring systems detect discussion of self-harm or another serious safety issue, ChatGPT alerts ...
OpenAI has introduced a 'Trusted Contact' feature in ChatGPT that can alert a chosen person if conversations suggest serious ...
OpenAI has launched a new ChatGPT feature allowing the AI to alert a trusted friend or family member if it detects serious ...
A new safety feature called ‘Trusted Contact’ comes amid growing concerns about AI pushing people towards mania and psychosis ...
OpenAI has added a new optional safety tool called Trusted Contact to ChatGPT, allowing the platform to alert a user-selected ...
With "Trusted Contact," users can designate an emergency contact, who can be alerted by ChatGPT if any self-harm ...
OpenAI has launched Trusted Contact, an optional feature for ChatGPT that allows users to designate an adult to be notified during a mental health crisis. If automated systems and human reviewers ...
New safety layer: ChatGPT now lets adults nominate a trusted person to be alerted if serious self-harm concerns arise during conversations. Human review process: Automated detection flags potential ...
OpenAI wants users to avoid self-harm, and ChatGPT will immediately contact a “Trusted Contact” if it detects any risks ...
OpenAI is rolling out a new safety feature in ChatGPT called Trusted Contact, allowing adult users to nominate a person they ...