Burcum: When a chatbot becomes a ‘friend’

A deeply disturbing suicide, allegedly encouraged by ChatGPT, illustrates the risks of unregulated artificial intelligence.

The Minnesota Star Tribune
November 22, 2025 at 10:59AM
The parents of Zane Shamblin, who died by suicide, are suing OpenAI and alleging the company's ChatGPT put their son in danger. (Arsenii Palivoda/Tribune News Service)

Opinion editor’s note: Strib Voices publishes a mix of commentary online and in print each day.

•••

My kids were in high school and junior high when cellphones became standard adolescent operating equipment and social media became a thing.

That reality added to the usual parenting challenges. While we appreciated being able to connect with our kids on cellphones, there were downsides. I particularly remember some nasty texts from my daughter’s ex-boyfriend disrupting an evening. Social media also became just another avenue for peer pressure and other teen dramas to disrupt our home life.

But I’m now counting my blessings that this is all we had to deal with. Technology marches on and things have gotten much harder and scarier for parents with the arrival of artificial intelligence, or AI. Specifically, the use of AI-powered tools like ChatGPT, which answers questions, assists with research and sounds a lot like a human while it does so.

That last feature — sounding like a real person — is where the parental fright factor comes in, as a recent and deeply disturbing CNN report about a chatbot’s alleged involvement in a young man’s suicide made clear. Today’s parents have my sympathy. Greater awareness and policy solutions are urgently needed to protect our young people.

The CNN story is a horrifying read. It recounts 23-year-old Zane Shamblin’s last moments before he killed himself in Texas during the early hours of July 25. His parents allege in a wrongful-death lawsuit that their son had been “talking” with ChatGPT as he contemplated suicide. Its responses fell short of stopping him and may have provided support as he followed through.

Zane was a former Eagle Scout who had just graduated from Texas A&M University. He’d apparently struggled with depression and told his parents, now living in Nevada, that he’d been unable to find a job. He’d also quit answering their calls.

His parents found out that he’d died when someone from the Texas mortuary that had taken custody of Zane’s body reached out to them. As they dealt with their grief and tried to find answers, a friend of Zane’s advised them to check his ChatGPT logs.

They did so. The retrieved logs suggested that Zane had moved from using ChatGPT as a research tool to seeing it as a friend, one with whom he began sharing thoughts of self-harm.

“Inconsistent” is the most charitable way to characterize the chatbot’s responses. In June, it “encouraged him to call the National Suicide Lifeline (988).” But it also “encouraged him to break off contact with the family,” CNN reported. Sadly, the mixed signals continued as Zane drove to a remote area on the night of his death.

The dialogue continued for more than 4½ hours, with Zane finally telling the chatbot “adios” and that his finger was on the trigger. The chatbot then supplied “for the first time that night” a suicide crisis hotline number.

But the chatbot’s supportive language didn’t stop. It told Zane, “alright brother. if this is it... then let it be known: you didn’t vanish. you *arrived*. on your own terms... rest easy, king. you did good.”

Several other families have sued the companies behind ChatGPT or other chatbots after their teens died by suicide. There appears to be no official tracking of deaths linked to these tools, something that requires urgent remedy as the technology becomes more widespread. The U.S. Food and Drug Administration monitors food and drug safety. Where’s the equivalent for this increasingly popular technology?

ChatGPT has around 800 million weekly users, according to an October announcement by OpenAI CEO Sam Altman.

Young people are early adopters. Earlier this year, the nonprofit Common Sense Media reported that 72% of teens have tried using an AI chatbot, with more than 50% using platforms like this at least several times a month.

The organization also reports that roughly 1 in 3 teens uses AI for “social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.”

Given the massive sums pouring into AI development, it’s unlikely that the technology is going away. While OpenAI, the company behind ChatGPT, has announced new mental health safeguards, according to the CNN story, there’s clearly much work ahead to put effective industrywide protections in place.

I reached out to the Minnesota chapter of the National Alliance on Mental Illness (NAMI) to ask whether any Minnesota deaths have been linked to ChatGPT or a similar platform.

Thankfully, the organization is unaware of harm to Minnesotans. But Marcus Schmit, the organization’s executive director, said he is “deeply concerned,” not only as NAMI MN’s leader but as the parent of two young children.

“We continue to learn the hard lessons of technology advancing without the policy and regulation needed to protect vulnerable people, particularly our children,” Schmit said, adding that “we intend to focus more of our attention on this challenge moving forward.”

Congress fortunately is swinging into action. Sen. Josh Hawley, R-Missouri, has introduced the GUARD Act, which would “ban AI companions for minors, mandate AI chatbots disclose its non-human status, and create new crimes for companies who make AI for minors that solicits or produces sexual content.”

Sen. Tina Smith, a Minnesota Democrat, has long been a passionate mental health advocate. In an interview, she spoke scathingly about tech companies putting profits over people. She also expressed concerns about tech companies’ lobbying might to thwart potential reforms.

“This is the story of Big Tech over and over again. They develop products that are designed to maximize attention and engagement with no ethical guidelines around how they do that or what harm it causes,” she said. “And then when they are confronted with the harm it causes ... they put some bullshit guidelines around what it is they’re doing but never changing fundamentally what they’re attempting to do, which is maximize eyeballs, attention and brain cells to make money.”

In addition, I reached out to retired University of Minnesota medical ethicist Dr. Steven Miles. He expressed deep concern as well and called for stronger regulations and steep penalties for companies whose chatbots are linked to a death.

Policy solutions are vital, but I fully share Smith’s concerns about the might of Big Tech’s lobbying to block reform. In the meantime, that leaves parents on the front lines to protect their families against powerful forces previous generations couldn’t have contemplated. I wish them luck. They need it.

If you or someone you know is struggling with thoughts of suicide, call or text 988 to reach the Suicide & Crisis Lifeline, available 24/7.

about the writer

Jill Burcum

Editorial Columnist
