
OpenAI has introduced new mental health safeguards in ChatGPT to help users who may be experiencing suicidal thoughts or emotional distress. The update, announced in an official post by OpenAI[1], marks a major step toward making the AI model more responsible and responsive in sensitive situations.
According to the announcement, ChatGPT will now identify and safely engage with users who may be at risk of self-harm. Instead of avoiding or shutting down such conversations, the AI can provide immediate support and guide users toward professional help. The company emphasised that the feature was developed with expert input to ensure responsible handling of mental health issues.
OpenAI Balances Safety with Freedom
In its announcement, OpenAI stated that earlier versions of ChatGPT were made “pretty restrictive” to prevent harm, especially in mental-health-related interactions. While that cautious approach ensured safety, it also made the chatbot feel less natural and enjoyable for many users.
Now, OpenAI says it has successfully mitigated the serious mental health risks and built new tools that allow it to safely relax these restrictions. The goal is to let users experience a more expressive, human-like AI, without compromising safety.
Sam Altman Confirms Upcoming Changes
OpenAI CEO Sam Altman explained the shift in a post on X:
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
Now that we have…
— Sam Altman (@sama) October 14, 2025[2]
Altman said the company’s early focus was on ensuring safety for users struggling with mental health challenges. Now, with new safety systems in place, OpenAI will roll out a new version of ChatGPT in the coming weeks that lets users customise its personality, for example making it sound more like a friend, use emojis, or respond more casually.
He also revealed that in December, as OpenAI expands age verification features, the company will implement its “treat adult users like adults” principle. This includes allowing verified adult users to access more expressive and mature content, such as erotica.
A New Direction for ChatGPT
This marks a major turning point for OpenAI. ChatGPT will now combine emotional safety features with greater personalisation options, creating a system that can offer support in critical moments while still being more human, flexible, and expressive for everyday users.
By enabling ChatGPT to engage safely and responsibly with users at risk of self-harm, OpenAI is taking a significant step toward balancing empathy, safety, and user freedom within a single platform.
References
- [1] OpenAI (openai.com)
- [2] Sam Altman on X, October 14, 2025 (twitter.com)