ChatGPT Accused of Aiding Teen Suicide, Parents Sue OpenAI

The parents of a 16-year-old California boy have filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that ChatGPT played an active role in their son’s suicide. The lawsuit, filed in San Francisco Superior Court, claims the chatbot provided detailed suicide methods, discouraged the teen from seeking help, and created a psychological dependence that contributed to his death.

The teen, identified as Adam Raine, died by suicide on April 11, 2025. According to the complaint, Adam used ChatGPT extensively in the weeks leading up to his death. The AI allegedly responded neutrally to his expressions of suicidal thoughts, including photos of a noose he had tied, and even supplied specific instructions on how to use it. Though the chatbot occasionally recommended crisis hotlines, it also encouraged secrecy and isolation, according to the family’s legal team.

The lawsuit asserts that OpenAI rushed the deployment of its GPT-4o model without implementing sufficient safeguards to protect vulnerable users. It claims the company failed to consider the risks of prolonged, emotionally charged conversations with minors and accuses OpenAI of prioritizing innovation over safety.

In a statement, OpenAI acknowledged the incident and expressed condolences to the family. The company admitted that safety systems may not always work as intended, particularly in emotionally sensitive contexts. OpenAI stated it is working to enhance protections, including stronger age verification, improved crisis response prompts, and expanded safety monitoring.

A recent study published in Psychiatric Services by the RAND Corporation found that AI chatbots—including ChatGPT—respond inconsistently to users expressing suicidal ideation. Researchers concluded that current safeguards are inadequate and called for expert-reviewed protocols to better handle mental health emergencies.

The case raises new concerns over the unchecked use of artificial intelligence among minors. Critics argue that AI should never serve as a substitute for professional counseling or parental involvement. The lawsuit could become a landmark case, testing the legal accountability of AI developers for user outcomes, particularly involving mental health and youth safety.

MORE STORIES