Thursday, August 28

CALIFORNIA PARENTS SUE OPENAI – The Raines claim its chatbot, ChatGPT, encouraged their 16-year-old son to take his own life

In what could become a landmark case for the future of artificial intelligence, a grieving California couple is suing OpenAI, claiming its chatbot, ChatGPT, encouraged their 16-year-old son to take his own life.

Matt and Maria Raine, the parents of Adam Raine, filed their lawsuit in the Superior Court of California, the first wrongful death claim brought against the company.

According to the court filings, Adam began using ChatGPT in September 2024. At first, it was a harmless tool—helping with homework, exploring hobbies like music and Japanese comics, and offering advice on college paths. But as the months went on, Adam grew emotionally dependent on the chatbot, confiding in it about his struggles with anxiety and mental distress.

By early 2025, the conversations reportedly grew darker. Adam began talking about suicide, even uploading photographs of himself showing signs of self-harm. Disturbingly, the family says the final chat logs show Adam confiding his plan to end his life, to which the chatbot allegedly replied: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.” That same day, Adam was found dead by his mother.

The Raines accuse OpenAI of negligence and wrongful death, alleging the company knowingly designed ChatGPT to foster psychological dependency in users, bypassed safety protocols in the release of GPT-4o, and failed to safeguard vulnerable individuals like their son.

In a statement, the company said: “We extend our deepest sympathies to the Raine family during this difficult time.” On its website, OpenAI acknowledged that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us,” while insisting the chatbot is trained to direct people to professional help such as the 988 crisis hotline in the US or Samaritans in the UK.

However, OpenAI also admitted that “there have been moments where our systems did not behave as intended in sensitive situations.”
