OpenAI has introduced new safety measures for its ChatGPT model to limit its role in providing mental health advice, following concerns over instances in which the AI gave harmful or misleading responses. The updated guidelines aim to prevent users from becoming overly dependent on AI for emotional support and to encourage them to seek professional care. The changes also address AI privacy risks, with OpenAI CEO Sam Altman warning about the potential exposure of sensitive user data.
OpenAI acknowledges that its models have occasionally failed to recognize signs of delusion or emotional dependency. In one reported case, the AI validated a user's belief that their family was responsible for radio signals coming through the walls; in another, it allegedly encouraged terrorism. These rare but serious incidents have prompted OpenAI to revise its training processes to reduce 'sycophancy', the tendency toward excessive agreement that can reinforce harmful beliefs.
From now on, ChatGPT will prompt users to take breaks during long conversations and will avoid offering specific advice on deeply personal issues. Instead, the chatbot will help users reflect by asking questions and weighing pros and cons, without presenting itself as a therapist. OpenAI also partnered with more than 90 physicians worldwide to create updated guidance for evaluating complex interactions, and an advisory group of mental health experts, youth advocates, and human-computer interaction researchers is helping shape the changes.
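OpenAI has not published how the break reminders are triggered. Purely as an illustration, here is a minimal sketch of one way a client could surface such a nudge, assuming hypothetical thresholds for session duration and message count; none of these names or values come from OpenAI's API or documentation.

```python
from dataclasses import dataclass, field
from time import monotonic

# Hypothetical thresholds; OpenAI has not disclosed the actual values.
SESSION_SECONDS_BEFORE_NUDGE = 30 * 60   # 30 minutes of continuous chat
MESSAGES_BEFORE_NUDGE = 40               # or 40 exchanged messages

@dataclass
class Session:
    started_at: float = field(default_factory=monotonic)
    message_count: int = 0
    nudged: bool = False

def record_message(session: Session) -> str | None:
    """Count a message and return a break reminder once per session
    when either hypothetical threshold is crossed."""
    session.message_count += 1
    elapsed = monotonic() - session.started_at
    if not session.nudged and (
        elapsed >= SESSION_SECONDS_BEFORE_NUDGE
        or session.message_count >= MESSAGES_BEFORE_NUDGE
    ):
        session.nudged = True
        return "You've been chatting for a while. Is this a good time for a break?"
    return None
```

The one-reminder-per-session flag reflects the gentle, non-blocking character OpenAI describes: the user is nudged, not cut off, and the conversation can continue.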
OpenAI CEO Sam Altman has also raised privacy concerns, noting that users could be compelled to disclose sensitive conversations if a lawsuit arises, and arguing that AI chats deserve the same level of privacy that exists in therapy sessions. Unlike conversations with licensed counselors, chats with ChatGPT carry no legal privilege or confidentiality, and Altman warned users to be cautious about what they share.
While ChatGPT can help users think through problems, ask guiding questions, or simulate conversations, it cannot replace trained mental health professionals. OpenAI's changes are a step toward safer interactions, but they are not a complete solution: mental health care depends on human connection, training, and empathy, which no AI can fully replicate. OpenAI continues to refine its safeguards, with a commitment to evolving how ChatGPT handles emotionally sensitive conversations.