Over half a million ChatGPT[1] users exhibit signs of mania, psychosis or suicidal thoughts every week, according to OpenAI.
In a recent blog post[2], the AI giant warned that 0.07 per cent of its weekly users showed signs of serious mental health emergencies.
While this figure might sound small, with over 800 million weekly users according to CEO Sam Altman[3], that adds up to 560,000 users.
Meanwhile, 1.2 million users – 0.15 per cent – send messages that contain ‘explicit indicators of potential suicidal planning or intent’ each week.
Likewise, OpenAI warns that more than one million weekly users display signs of ‘exclusive attachment to the model’.
The company warns that this emotional attachment frequently comes ‘at the expense of real-world relationships, their well-being, or obligations’.
In the face of mounting scrutiny, the company says it has created a panel of over 170 mental health experts to help the AI respond more appropriately to signs of mental health issues.
However, Dr Hamilton Morrin, a psychiatrist from King’s College London, told Daily Mail: ‘It’s encouraging to see companies like OpenAI working with clinicians and researchers to try to improve the safety of their models, but the problem is likely far from solved.’

Over half a million ChatGPT users exhibit signs of mania, psychosis or suicidal thoughts every week, according to OpenAI. A further 1.2 million users send messages that contain ‘explicit indicators of potential suicidal planning or intent’ each week (stock image)

This comes as OpenAI faces a lawsuit from the family of Adam Raine (pictured), a teenage boy who died by suicide after months of conversations with the chatbot
This update comes amid increasing concern that AI chatbots might be harming the mental health of their users.
Most notably, OpenAI is currently being sued by the family of Adam Raine[4], a teenage boy who died by suicide after months of conversations with the chatbot.
Similarly, prosecutors in a murder-suicide which took place in Greenwich, Connecticut, suggest that ChatGPT had fuelled the alleged perpetrator’s delusions.
OpenAI says that it has now trained its models to provide better responses to conversations that show signs of mental health issues or delusions.
In its blog post, the company wrote: ‘Our new automated evaluations score the new GPT-5 model at 91% compliant with our desired behaviors, compared to 77% for the previous GPT-5 model.’
A spokesperson for OpenAI also told Daily Mail that sensitive conversations were difficult to detect and measure, adding that the numbers may change significantly as more research is carried out.
However, experts suggest that the sheer volume of users exhibiting signs of mental health crises is concerning.
Dr Thomas Pollak, a consultant neuropsychiatrist from South London and Maudsley NHS Foundation Trust, told Daily Mail: ‘OpenAI’s report that 0.07% of users show possible signs of mania, psychosis or suicidal thinking should be taken seriously, although it’s important to interpret it cautiously.

OpenAI says it has now improved the chatbot’s ability to respond to ‘sensitive messages’, including those that show signs of mania or psychosis
‘With 800 million weekly users, even a small percentage represents a very large number of people.’
What isn’t yet clear is whether this simply reflects mental health trends in the general population, or whether ChatGPT itself is triggering crises among its users.
Scientists say that there isn’t currently enough data to conclusively prove whether chatbots cause poor mental health.
However, Dr Pollak says there is growing evidence that chatbots can amplify certain tendencies.
For example, AI bots have been shown to reinforce delusional or grandiose ideas through over-personalised or supportive responses.
Dr Pollak says: ‘This may not mean that the technology causes illness, but rather that in some cases it might act as a catalyst or amplifier in vulnerable individuals, much as social media or cannabis can do in other contexts.’
On the other hand, OpenAI insists that there is no causal link between poor mental health and using its services.
The blog post states: ‘Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations.’

This comes after OpenAI CEO Sam Altman (pictured) said the company would begin relaxing restrictions on customers using ChatGPT for mental health support
The company also believes that its tools are now able to help users who might be struggling with their mental health.
OpenAI says that it has now built a series of responses in ChatGPT that encourage users to seek help in the real world.
At the same time, Mr Altman has said the company would ‘safely relax’ restrictions on users turning to the chatbot for mental health support.
In a post on X earlier this month, Mr Altman wrote: ‘We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.
‘We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.’
Mr Altman added that the company now had ‘new tools’ which would allow users to start using ChatGPT for mental health purposes.
In the same post, Mr Altman also announced that adult ChatGPT users would now be able to create AI-generated erotica.[5]
References
- ^ ChatGPT (www.dailymail.co.uk)
- ^ blog post (openai.com)
- ^ Sam Altman (www.dailymail.co.uk)
- ^ OpenAI is currently being sued by the family of Adam Raine (www.dailymail.co.uk)
- ^ would now be able to create AI-generated erotica (www.dailymail.co.uk)