OpenAI has introduced an optional safety feature in ChatGPT called Trusted Contact, which lets adult users designate a trusted friend or family member to be notified if a conversation suggests a serious risk of self-harm. It is a simple feature, but it may be one of the most human the company has shipped.

When OpenAI's automated monitoring systems flag a conversation as a potential safety concern, the case is escalated to human reviewers; only if they confirm a serious risk does ChatGPT alert the user's chosen contact.

The feature expands on ChatGPT's existing parental safety notifications, which alert parents when a linked teen account shows signs of acute distress.

The launch comes amid growing concern about AI chatbots' effects on vulnerable users, including reports of conversations pushing people towards mania and psychosis.