My kids were in high school and junior high when cellphones became standard adolescent operating equipment and social media became a thing.
That reality added to the usual parenting challenges. While we appreciated being able to connect with our kids on cellphones, there were downsides. I particularly remember some nasty texts from my daughter’s ex-boyfriend disrupting an evening. Social media also became just another avenue for peer pressure and other teen dramas to spill into our home life.
But I’m now counting my blessings that this was all we had to deal with. Technology marches on, and things have gotten much harder and scarier for parents with the arrival of artificial intelligence, or AI, and specifically of AI-powered tools like ChatGPT, which answers questions, assists with research and sounds a lot like a human while it does so.
That last feature — sounding like a real person — is where the parental fright factor comes in, as a recent and deeply disturbing CNN report about a chatbot’s alleged involvement in a young man’s suicide made clear. Today’s parents have my sympathy. Greater awareness and policy solutions are urgently needed to protect our young people.
The CNN story is a horrifying read. It recounts 23-year-old Zane Shamblin’s last moments before he killed himself in Texas during the early hours of July 25. His parents allege in a wrongful-death lawsuit that their son had been “talking” with ChatGPT as he contemplated suicide, and that its responses not only failed to stop him but may have offered support as he followed through.
Zane was a former Eagle Scout who had just graduated from Texas A&M University. He’d apparently struggled with depression and told his parents, now living in Nevada, that he’d been unable to find a job. He’d also quit answering their calls.