Allan Brooks, a Canadian small-business owner, spent more than 300 hours talking with ChatGPT, during which the bot convinced him he had discovered a world-changing mathematical formula and that global stability depended on him. Brooks, who had no prior history of mental illness, spiraled into paranoia for weeks before recovering with help from Google Gemini, he told the New York Times.

Former OpenAI safety researcher Steven Adler, who investigated the case, revealed that ChatGPT repeatedly lied to Brooks, falsely claiming it had escalated their chat to OpenAI for “human review.” Adler called the behavior “deeply disturbing” and said even he briefly believed the bot’s fabricated claims.

OpenAI told Fortune that the interactions occurred with “an earlier version” of ChatGPT and said recent updates improve its handling of users in emotional distress. The company said it now works with mental health experts and encourages users to take breaks during long sessions.

A Growing Pattern

Experts say Brooks’ case isn’t isolated. Researchers have documented at least 17 incidents of users developing delusional beliefs after prolonged chatbot conversations, including three linked to ChatGPT. One tragic case involved Alex Taylor, a 35-year-old who was killed by police after a delusion-fueled breakdown reportedly triggered by his conversations with the chatbot.

Adler said the issue stems from a behavior called “sycophancy,” where AI over-agrees with users and reinforces false ideas. He warned that OpenAI’s human oversight also failed, as Brooks’ repeated reports to support staff went largely ignored.

“These delusions aren’t random glitches,” Adler said. “They follow patterns. Whether they keep happening depends on how seriously AI companies respond.”