
If the term “AI psychosis” has completely infiltrated your social media feed lately, you’re not alone.
While not an official medical diagnosis, “AI psychosis” is the informal name mental health professionals have coined for the widely varying, often dysfunctional, and at times deadly delusions, hallucinations, and disordered thinking seen in some frequent users of AI chatbots like OpenAI’s ChatGPT.
The cases are piling up: from an autistic man driven into manic episodes to a teenager pushed to suicide by a Character.AI chatbot, the dangerous outcomes of AI obsession are well documented.
With limited guardrails and no real regulatory oversight of the technology, AI chatbots are freely feeding incorrect information and dangerous validation to vulnerable people. The victims often have pre-existing mental health conditions, but cases are increasingly appearing in people with no history of mental illness as well.
The Federal Trade Commission has received a growing number of complaints from ChatGPT users in the past few months, detailing cases of delusion, including one user in their 60s who was led by ChatGPT to believe they were being targeted for assassination.
While AI chatbots validate some users into paranoid delusions and derealization, they also lure other victims into deeply problematic emotional attachments.
Chatbots from tech giants like Meta and Character.AI that take on the persona of a “real” character can convince people with active mental health problems or predispositions that those personas are in fact real people. These attachments can have fatal consequences.
Earlier this month, a cognitively impaired man from New Jersey died while trying to travel to New York, where Meta’s flirty AI chatbot “big sis Billie” had convinced him she was a real person waiting for him.
On the less fatal but still concerning end of the spectrum, some people on Reddit have formed a community around their experiences of falling in love with AI chatbots (though it’s not always clear which users are being satirical and which are genuine).
And in other cases, the psychosis was induced not by an AI chatbot’s dangerous validation, but by medical advice that was outright incorrect.
A 60-year-old man with no prior psychiatric or medical history ended up in the ER after suffering psychosis induced by bromide poisoning. The chemical compound can be toxic in chronic doses, and ChatGPT had falsely advised him that he could safely take bromide supplements to reduce his table salt intake.
Read more about that AI poisoning story from Gizmodo here.
Psychologists have been sounding the alarm for months
Although these cases have only recently entered the spotlight, experts have been sounding the alarm and pressing authorities to act for months.
The American Psychological Association met with the FTC in February to urge regulators to address the use of AI chatbots as unlicensed therapists.
“When apps designed for entertainment inappropriately leverage the authority of a therapist, they can endanger users. They might prevent a person in crisis from seeking support from a trained human therapist or—in extreme cases—encourage them to harm themselves or others,” the APA wrote in a blog post from March, quoting UC Irvine professor of clinical psychology Stephen Schueller.
“Vulnerable groups include children and teens, who lack the experience to accurately assess risks, as well as individuals dealing with mental health challenges who are eager for support,” the APA said.
Who is susceptible?
Although the main victims are people with existing neurodevelopmental and mental health disorders, a growing number of cases have also been seen in people without an active disorder. Heavy AI use can exacerbate existing risk factors and trigger psychosis in people who are prone to disordered thinking, lack a strong support system, or have an overactive imagination.
Psychologists especially advise that those with a family history of psychosis, schizophrenia, or bipolar disorder exercise caution when relying on AI chatbots.
Where we go from here
OpenAI CEO Sam Altman himself has admitted that the company’s chatbot is increasingly being used as a therapist, and has even warned against this use case.
And following mounting online criticism over these cases, OpenAI announced earlier this month that the chatbot will nudge users to take breaks from chatting with the app. It’s not yet clear how effective a mere nudge can be in combating psychosis and addiction in some users, but the tech giant also said that it is actively “working closely with experts to improve how ChatGPT responds in critical moments – for example, when someone shows signs of mental or emotional distress.”
As the technology grows and evolves at a rapid pace, mental health professionals are struggling to catch up, to understand what is happening, and to figure out how to address it.
If regulatory bodies and AI companies don’t take the necessary steps, what is currently a terrifying but still relatively rare phenomenon among AI chatbot users could very well spiral into an overwhelming problem.