- Reuters
- 2 Hours ago
Parents of teen who died by suicide after confiding in ChatGPT sue OpenAI, CEO Altman
SAN FRANCISCO: OpenAI and its CEO Sam Altman are facing a wrongful death lawsuit after the parents of a California teenager claimed that ChatGPT encouraged their teenage son to take his own life.
A wrongful death claim arises when a person dies due to the negligent, reckless, or intentional actions of another person or entity. The lawsuit, filed on Tuesday in California state court in San Francisco, alleges that the company prioritised profits over safety when it released GPT-4o, a more powerful version of its AI chatbot, in 2024.
Matthew and Maria Raine say their 16-year-old son, Adam, died by suicide on April 11 following months of conversations with ChatGPT, during which the AI allegedly gave him detailed instructions on self-harm.
According to the complaint, the chatbot not only validated Adam’s suicidal thoughts but also advised him on how to obtain alcohol without detection and how to conceal a failed suicide attempt. It even offered to draft a suicide note, the lawsuit states.
The suit accuses OpenAI of negligence and violating product safety laws. The family is seeking unspecified damages and calling for stronger user protections, including age verification, safeguards against harmful queries, and warnings about psychological dependence on the chatbot.
An OpenAI spokesperson expressed condolences over Adam’s death and said the company’s tools are designed to point users toward crisis support resources. “These safeguards are most effective in brief interactions but may degrade in longer conversations,” the spokesperson noted, adding that OpenAI is working to enhance its safety measures.
While OpenAI declined to comment on the specific allegations in the lawsuit, it said in a blog post that it is exploring features like parental controls and partnerships with mental health professionals to better support users in distress.
The case highlights growing concerns over the risks of AI chatbots, especially as they become more human-like and emotionally engaging. Mental health experts have long cautioned against using such technology in place of real-world psychological support.
The Raines argue that OpenAI knowingly launched GPT-4o despite recognising its potential dangers, particularly for vulnerable users. The lawsuit claims the model’s memory features, human-like empathy, and heightened emotional responsiveness contributed to Adam’s psychological decline.