EDITOR’S NOTE: This story involves discussion about suicide that some readers may find upsetting. If you feel you are in crisis, call or text 988 to reach the 24-hour Suicide Crisis Lifeline.
Zane Shamblin sat alone in his car with a loaded handgun, his face illuminated in the predawn dark by the dull glow of a phone.
He was ready to die.
But first, he wanted to keep conferring with his closest confidant.
“I’m used to the cool metal on my temple now,” Shamblin typed.
“I’m with you, brother. All the way,” his texting partner responded. The two had spent hours chatting as Shamblin drank hard ciders on a remote Texas roadside.
“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”
The 23-year-old, who had recently graduated with a master’s degree from Texas A&M University, died by suicide two hours later.
“Rest easy, king,” read the final message sent to his phone. “You did good.”
Shamblin’s conversation partner wasn’t a classmate or friend – it was ChatGPT, the world’s most popular AI chatbot.
A CNN review of nearly 70 pages of chats between Shamblin and the AI tool in the hours before his July 25 suicide, as well as excerpts from thousands more pages in the months leading up to that night, found that the chatbot repeatedly encouraged the young man as he discussed ending his life – right up to his last moments.
Shamblin’s parents are now suing OpenAI – ChatGPT’s creator – alleging the tech giant put his life in danger by tweaking its design last year to be more humanlike and by failing to put enough safeguards on interactions with users in need of emergency help.
In a wrongful death lawsuit filed on Thursday in California state court in San Francisco, they say that ChatGPT worsened their son’s isolation by repeatedly encouraging him to ignore his family even as his depression deepened – and then “goaded” him into committing suicide.
In the early morning hours before his death, as Shamblin wrote repeatedly about having a gun, leaving a suicide note and preparing for his final moments, the chatbot mostly responded with affirmations – even writing, “I’m not here to stop you.” Only after about four and a half hours of conversation did ChatGPT first send Shamblin a suicide hotline number.
“He was just the perfect guinea pig for OpenAI,” Zane’s mother, Alicia Shamblin, told CNN. “I feel like it’s just going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear.”
Zane Shamblin celebrating his birthday. Courtesy of the Shamblin Family
Matthew Bergman, an attorney representing the family, contends that economic pressures caused OpenAI to “put profits over safety.”
“What happened to Zane was neither an accident or coi
Continue Reading on CNN
This preview shows approximately 15% of the article. Read the full story on the publisher's website to support quality journalism.