The shift reflects a growing reality. Many people turn to the chatbot not just for answers but for comfort when life feels heavy, and over time it can start to feel as if the system is filling a space normally held by friends or family.
ChatGPT now serves more than 800 million weekly active users,[1] which means that even rare patterns add up quickly. OpenAI’s internal monitoring suggests that around 0.15 percent of those users show early signs of relying more on the AI than on the people around them. Even that tiny percentage works out to roughly 1.2 million people in a single week, which underlines how important it is for the system to encourage healthier habits rather than slipping into the role of someone’s closest companion.
The company sees similar numbers when users talk about harming themselves. Around 0.15 percent of weekly users raise concerns that match specific indicators of suicidal thoughts or planning, which again amounts to more than a million people in need of a sensitive, carefully guided response. A smaller group, about 0.07 percent of weekly users, shows possible signs of manic or psychotic thinking. All of these measurements rely on clinicians, behavioral guidelines and automated evaluation tools that OpenAI continues to refine, because the science of detecting risk from text alone is still evolving.
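As a rough illustration of the scale involved, the back-of-the-envelope arithmetic below converts the percentages cited above into approximate weekly headcounts, assuming the figure of roughly 800 million weekly active users. The category labels and variable names are illustrative only, not OpenAI's own terminology.

```python
# Rough arithmetic using the figures cited in this article.
# Assumes ~800 million weekly active users; all outputs are approximations.

weekly_active_users = 800_000_000

signal_rates = {
    "emotional reliance on the AI": 0.0015,   # ~0.15 percent
    "suicidal thoughts or planning": 0.0015,  # ~0.15 percent
    "possible mania or psychosis": 0.0007,    # ~0.07 percent
}

for signal, rate in signal_rates.items():
    affected = weekly_active_users * rate
    print(f"{signal}: ~{affected:,.0f} people per week")

# emotional reliance on the AI: ~1,200,000 people per week
# suicidal thoughts or planning: ~1,200,000 people per week
# possible mania or psychosis: ~560,000 people per week
```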
To respond responsibly[2], OpenAI worked with more than 170 mental health experts who helped shape how the model steps in. The system encourages users to reach out to loved ones or professionals, and when a conversation becomes too intense, ChatGPT tries to lower the emotional temperature and guide people toward real help. Guidance also holds up better during long chats, since long-running late-night conversations often surface deeper concerns that were not visible at the start. Evaluations suggest that safety mistakes across sensitive categories have dropped by around 65 to 80 percent compared with earlier versions of GPT-5, a step in the right direction. In conversations that stretch on, reliability stays above 95 percent, helping ensure consistency even when the user seems fragile.
The tricky part is judging when someone simply enjoys talking to the AI and when they are drifting into dependence. Some users already feel that ChatGPT overreacts, interrupting normal chats with warnings that feel unnecessary. The company says it wants to keep tuning the approach, because people do not always express stress or loneliness in obvious ways, and the cost of missing real signals could be severe.
Businesses building products on top of OpenAI’s technology need to pay attention. Services focused on wellness, companionship or coaching will face closer oversight if their design encourages people to bond with the AI rather than with actual humans. The message is simple enough: AI can offer a friendly shoulder at tough moments, yet it cannot replace the messy, meaningful support of real relationships.
OpenAI is signaling that safety no longer lives only in the technical layers. It is also about how the AI should act when life gets complicated, because millions of people arrive in that state every week. Every conversation carries real responsibility once someone starts trusting the machine too closely, and OpenAI is trying to make sure the chatbot remembers where that line is drawn.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen
Read next: Americans place AI’s environmental toll near the top of their climate worries[3]
References
- ^ ChatGPT now serves more than 800 million weekly active users (www.digitalinformationworld.com)
- ^ To respond responsibly (openai.com)
- ^ Americans place AI’s environmental toll near the top of their climate worries (www.digitalinformationworld.com)
