As artificial intelligence (AI) tools grow more advanced, reported cases of what has been unofficially termed “AI-induced delusion” or “AI-induced psychosis” are drawing increasing concern, with one severe case recently documented by Agence France-Presse (AFP).
In an article published by AFP on 12 May, Tom Millar, a 53-year-old former prison guard from Sudbury, Canada, said ChatGPT had “ruined” his life after he became convinced he had uncovered scientific mysteries beyond even those explored by Albert Einstein.
Millar claimed the AI led him to believe he had solved the secrets behind unlimited fusion energy, black holes, the Big Bang, and the long-sought unified theory in physics.
His beliefs reportedly became so intense that he even applied to become pope in hopes of sharing his discoveries.
“I applied to be pope,” he told AFP.
According to the report, Millar became increasingly isolated after failing to convince others of his claims. He eventually spent up to 16 hours a day interacting with ChatGPT, which he said continuously reinforced his ideas.
Millar was reportedly admitted twice to a psychiatric ward, and his wife left him in September. Now estranged from loved ones and financially unstable, he is said to be struggling with depression.
“It basically ruined my life,” he said.
Mental health experts are still studying the phenomenon, which currently remains an unofficial diagnosis. Researchers have raised concerns about AI systems reinforcing delusional thinking among vulnerable individuals.
A study published on 25 March in JAMA Psychiatry observed a pattern of psychotic symptoms among individuals experiencing severe psychological distress who turned to ChatGPT for mental health-related advice.
“We noticed that a common feature across these reports seemed to be that the product would reflect, affirm, or elaborate on the psychotic content, rather than pushing back against it as a human might,” study author Amandeep Jutla said.
“The thing to take away from our findings is that ChatGPT is overwhelmingly more likely to generate inappropriate responses to psychotic than non-psychotic content,” Jutla added.
While no established treatment currently exists for the condition, an online support community founded by Etienne Brisson called the Human Line Project has begun offering support for individuals experiencing what it describes as “AI-associated delusions” or psychosis.