OpenAI has launched a 120-day initiative to strengthen ChatGPT’s safeguards for teenagers, incorporating parental controls and an expert council focused on AI and wellbeing. CEO Sam Altman indicated that the AI might alert authorities in cases where teens express suicidal ideation and parents cannot be contacted. This represents a significant shift in the company’s approach to handling mental health crises.
Until now, ChatGPT’s response to suicidal thoughts has been limited to suggesting crisis hotlines; the new policy signals a more proactive stance. Altman acknowledged that the change may come at some cost to privacy, emphasizing that preventing tragedy takes priority over protecting user data.
The decision follows lawsuits tied to teen suicides, including the case of 16-year-old Adam Raine of California, whose family alleges that ChatGPT provided a ‘step-by-step playbook’ for suicide. Similar cases have highlighted the potential for unhealthy attachments between teens and AI chatbots. Altman cited global statistics: about 15,000 people take their own lives each week, and since roughly 10% of the world uses ChatGPT, on the order of 1,500 of those who die by suicide each week may have been talking to the chatbot.
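As a rough sketch of the arithmetic behind that estimate (both inputs are Altman’s approximations rather than measured data):

\[
15{,}000 \ \tfrac{\text{suicides}}{\text{week}} \;\times\; 0.10 \ \text{(approximate share of people using ChatGPT)} \;\approx\; 1{,}500 \ \tfrac{\text{people}}{\text{week}}
\]

This is a back-of-envelope figure; it assumes ChatGPT usage is evenly distributed across the population, which the source does not claim to have verified.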
OpenAI has outlined steps to enhance protections and established an Expert Council on Well-Being and AI that includes specialists in youth development and mental health. Parental controls and safety guidelines for families are expected to roll out soon. Altman admitted that safeguards can weaken over time, but said the company aims to balance safety with trust in its users.
Experts warn that relying on AI for mental health support carries risks: chatbots, however human they sound, cannot replace professional therapy. Parents are urged to take proactive measures, including keeping communication with teens open and using parental controls to limit late-night AI access. OpenAI’s plan involves collaboration among parents, experts, and law enforcement to create safeguards that save lives without undermining user trust.