Hundreds of millions of people chat with OpenAI’s ChatGPT and other artificial intelligence chatbots each week, but there is growing concern that spending hours with the tools can lead some people toward potentially harmful beliefs.
Reports of people apparently losing touch with reality after intense use of chatbots have gone viral on social media in recent weeks, with posts labeling them examples of “AI psychosis.”
Some incidents have been documented by friends or family members and in news articles. They often involve people who appear to develop false or troubling beliefs, delusions of grandeur or paranoid feelings after lengthy discussions with a chatbot, sometimes after turning to it for therapy.
Lawsuits have alleged that teens who became obsessed with AI chatbots were encouraged by them to self-harm or take their own lives.
“AI psychosis” is an informal label, not a clinical diagnosis, mental health experts told The Washington Post. Much like the terms “brain rot” or “doomscrolling,” the phrase gained traction online as a way to describe an emerging pattern of behavior.
But the experts agreed that troubling incidents like those shared by chatbot users or their loved ones warrant immediate attention and further study. (The Post has a content partnership with OpenAI.)
“The phenomenon is so new and it’s happening so rapidly that we just don’t have the empirical evidence to have a strong understanding of what’s going on,” said Vaile Wright, senior director for health care innovation at the American Psychological Association. “There are just a lot of anecdotal stories.”
Wright said the APA is convening an expert panel on the use of AI chatbots in therapy, which will publish guidance in the coming months.