ChatGPT safety feature: Trusted Contact arrives
OpenAI is introducing an optional safety feature in ChatGPT called Trusted Contact, designed to notify a designated person when serious safety concerns surface in a conversation. The feature fits into OpenAI's broader safety strategy: it provides a mechanism for timely human intervention while preserving user privacy and autonomy. A sound implementation will require clear consent, robust data handling practices, and transparent disclosure of when and how alerts are triggered. For users, this can add a layer of support in high-stakes conversations around wellbeing, crisis events, or other sensitive topics. For organizations, the feature may serve as a risk-management tool that demonstrates a commitment to responsible AI use and protective workflows in line with regulatory expectations.
In practice, the feature demands thoughtful UX and policy design, including options to customize who is notified, what constitutes a trigger, and how long alert data remains accessible in the system. It also raises questions about liability and the risk of over-reliance on automation in crisis response. The next phase will likely bring more granular controls, improved opt-in experiences, and detailed governance reporting so stakeholders can gauge the impact and efficacy of trusted contacts in real-world deployments.
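To make the consent-and-trigger design concrete, here is a minimal sketch of how such settings might be modeled. OpenAI has not published a schema or API for Trusted Contact, so every name here (`TrustedContactSettings`, `should_notify`, the severity levels, the retention field) is a hypothetical illustration of the policy described above: no alert fires without explicit opt-in, a named contact, and a signal above the user's chosen threshold.

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

# Hypothetical severity levels for a safety classifier's output.
SEVERITY_NONE, SEVERITY_ELEVATED, SEVERITY_CRITICAL = 0, 1, 2


@dataclass
class TrustedContactSettings:
    """Illustrative opt-in settings a user might control.

    All field names are assumptions for illustration only; this is
    not a published OpenAI schema.
    """
    enabled: bool = False                       # explicit opt-in required
    contact_name: Optional[str] = None          # who gets notified
    notify_threshold: int = SEVERITY_CRITICAL   # what counts as a trigger
    retention: timedelta = timedelta(days=30)   # how long alert data is kept


def should_notify(settings: TrustedContactSettings, severity: int) -> bool:
    """Gate any alert on consent, a named contact, and the threshold."""
    return (
        settings.enabled
        and settings.contact_name is not None
        and severity >= settings.notify_threshold
    )


# Usage: without opt-in, even a critical signal sends nothing.
default = TrustedContactSettings()
opted_in = TrustedContactSettings(enabled=True, contact_name="Alex")
print(should_notify(default, SEVERITY_CRITICAL))   # False
print(should_notify(opted_in, SEVERITY_ELEVATED))  # False
print(should_notify(opted_in, SEVERITY_CRITICAL))  # True
```

The point of the sketch is the ordering of checks: consent and contact configuration are evaluated before severity, so automation alone can never reach a third party the user did not designate.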