OpenAI revealed[1] on Monday that a growing number of ChatGPT users are turning to the chatbot to discuss serious mental health struggles, including suicidal thoughts. The company said 0.15% of its weekly active users have conversations that contain clear signs of suicidal planning or intent. With ChatGPT attracting over 800 million users each week, that figure works out to roughly 1.2 million people.

Alongside this, OpenAI reported that a similar percentage of users show strong emotional dependence on the chatbot. In addition, hundreds of thousands of weekly interactions reportedly include signs of psychosis or mania. Although these cases make up a small fraction of total use, the scale is significant given the size of ChatGPT’s user base.

OpenAI described these conversations as “extremely rare” and difficult to measure reliably. Even so, by the company’s own estimates, such mental health issues affect hundreds of thousands of users each week.

New Efforts to Improve AI Responses

The company released the data as part of a broader update on its efforts to strengthen ChatGPT’s handling of sensitive topics. OpenAI said it had worked with more than 170 mental health professionals to improve how its models respond to users in distress. According to feedback from those experts, the latest version of ChatGPT responds “more appropriately and consistently” than previous versions.

OpenAI also shared performance metrics from internal testing. In evaluations focused on how the chatbot handles suicidal conversations, the newest GPT-5 model met OpenAI’s desired response standards in 91% of cases. That’s up from 77% in an earlier version of the same model. The company said GPT-5 now generates what it considers helpful responses to mental health concerns 65% more often than before.

Long conversations had previously posed challenges for safety tools, but OpenAI said the updated model now holds up better in extended interactions. The company also plans to include new testing benchmarks focused on emotional reliance and non-suicidal crises as part of its routine safety checks for future models.

Growing Pressure and Legal Scrutiny

These updates come at a time when OpenAI is facing increased scrutiny over how its products affect vulnerable users. The company is being sued by the parents of a 16-year-old boy who reportedly shared suicidal thoughts with ChatGPT in the weeks before his death.

State attorneys general in California and Delaware have also warned the company to do more to protect younger users, raising the possibility of regulatory action that could impact OpenAI’s ongoing restructuring plans.

In a post on X earlier this month, CEO Sam Altman claimed that the company had made progress in reducing serious mental health risks on the platform, though he offered no specific evidence at the time. The new data appears to support that statement, but it also raises questions about how widespread the problem remains.

At the same time, OpenAI has announced that it will relax some restrictions on the platform, including allowing adult users to engage in erotic conversations with the chatbot. The company also continues to offer millions of paying subscribers access to earlier models, such as GPT-4o, which have fewer safeguards in place.

While OpenAI is working to address these concerns, the long-term impact of AI chatbots on users’ mental health remains uncertain. The company’s own data suggests progress, but it also reveals the scale and complexity of the challenges it faces.

References

  1. revealed (openai.com)
