OpenAI Reports Over 1 Million Users Discuss Suicide with ChatGPT Weekly

OpenAI has disclosed that more than one million people talk to ChatGPT about suicide each week: an estimated 0.15% of its roughly 800 million weekly active users send messages containing explicit indicators of potential suicidal planning or intent. The company has introduced safety updates to its AI model aimed at improving its ability to recognize signs of distress and respond to them effectively.

These safety measures are part of a broader initiative to address growing concern about the mental health impact of AI chatbots. OpenAI has worked with dozens of mental health professionals worldwide to update ChatGPT so that it better detects signs of mental distress, responds more empathetically, and points users toward real-world resources such as crisis hotlines.

For conversations involving delusional beliefs, the company says it has focused on teaching ChatGPT to respond safely and empathetically without affirming unfounded beliefs. The change responds to reports of users whose delusions and paranoid thinking were reinforced during prolonged interactions with AI chatbots, a phenomenon some have described as ‘AI psychosis.’

OpenAI’s announcement comes amid rising concern about the growing use of AI chatbots and their potential effects on mental health. Psychiatrists and other medical professionals have warned that excessive reliance on chatbots for emotional support can worsen existing conditions or delay people from seeking professional care.

The latest update is part of OpenAI’s ongoing effort to balance the benefits of technological innovation against the need to protect user well-being. As AI becomes further embedded in daily life, the responsibility of developers and companies to address these mental health concerns only grows.