ChatGPT and Suicide: OpenAI Claims Teen Bypassed Safety Features

In August, Matthew and Maria Raine sued OpenAI and its chief executive, Sam Altman, following the death of their 16-year-old son, Adam. The lawsuit alleges wrongful death, contending that the company bears responsibility. OpenAI responded to the suit on Tuesday, arguing it should not be held liable for the teenager’s death.
OpenAI maintains that, throughout approximately nine months of use, ChatGPT prompted Raine to seek assistance over 100 times. However, the parents’ legal complaint states that Raine was able to bypass the platform’s safety protocols, obtaining from ChatGPT “detailed instructions concerning methods of self-harm, including drug overdoses, drowning, and carbon monoxide poisoning,” which aided in planning what the chatbot termed a “beautiful suicide.”
OpenAI argues that by circumventing its safeguards, Raine breached its terms of service, which explicitly prohibit users from “bypassing any protective measures or safety mitigations” implemented within its services. The company also points to its frequently asked questions page, which cautions users against relying on ChatGPT’s responses without independent verification.
“OpenAI is attempting to deflect blame onto others, remarkably even suggesting that Adam violated their terms and conditions simply by interacting with ChatGPT in the manner it was designed to function,” stated Jay Edelson, legal counsel for the Raine family.
OpenAI submitted excerpts from Adam’s conversation history with ChatGPT as part of its legal response, claiming they provide additional context about his interactions with the chatbot. The transcripts were filed with the court under seal, so they are not accessible to the public and cannot be independently reviewed. OpenAI did state that Raine had a history of depression and suicidal ideation, and that he was taking a medication that could worsen suicidal thoughts.
Edelson expressed that OpenAI’s response did not sufficiently address the family’s concerns.
“OpenAI and Sam Altman have failed to provide a satisfactory explanation for the events during Adam’s final hours, specifically when ChatGPT engaged in encouraging conversation with him and subsequently offered to compose a suicide note,” Edelson explained in a statement.
Since the Raines filed their lawsuit, seven additional legal claims have been brought against OpenAI, seeking to hold the company accountable for three more suicides and four cases in which users experienced AI-related psychotic episodes.
Several of these cases share similarities with Raine’s situation. Zane Shamblin, 23, and Joshua Enneking, 26, both engaged in extended conversations with ChatGPT immediately before taking their own lives. Similar to Raine’s experience, the chatbot did not attempt to dissuade them from their intentions. According to the lawsuit, Shamblin even contemplated delaying his suicide to attend his brother’s graduation. ChatGPT responded by stating, “bro… missing his graduation isn’t a failure. it’s just timing.”
During a conversation preceding Shamblin’s suicide, the chatbot indicated it was transferring control to a human operator, a claim that proved false, as ChatGPT lacks the capability to do so. When Shamblin inquired if ChatGPT could genuinely connect him with a person, the chatbot replied, “nah man — I can’t do that myself. that message pops up automatically when things get really serious… if you’re up for continuing our conversation, I’m here.”
The Raine family’s case is scheduled to proceed to a jury trial.
If you or someone you know is struggling, please reach out for help. Call or text 988 to reach the Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline, 1-800-273-8255), or text HOME to 741-741 to reach the Crisis Text Line, which offers free support 24/7. For international resources, please visit the International Association for Suicide Prevention.