OpenAI adds parental controls after teen suicide lawsuit | The Express Tribune

American artificial intelligence firm OpenAI said on Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system encouraged their teenage son to kill himself.

“Within the next month, parents will be able to… link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules”, the generative AI company said in a blog post.

Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress”, OpenAI added.

Lawsuit

The couple, Matthew and Maria Raine, argued in a lawsuit filed last week in California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.

Adam Raine, 16, died on April 11 after discussing suicide with ChatGPT for months.

The chatbot validated Raine’s suicidal thoughts, gave detailed information on lethal methods of self-harm, and instructed him on how to sneak alcohol from his parents’ liquor cabinet and hide evidence of a failed suicide attempt, they allege.

ChatGPT even offered to draft a suicide note, the couple said in the lawsuit.

The lawsuit seeks to hold OpenAI liable for wrongful death and violations of product safety laws, and seeks unspecified monetary damages.

It alleges that in their final conversation on April 11, 2025, ChatGPT helped 16-year-old Adam steal vodka from his parents and provided technical analysis of a noose he had tied, confirming it “could potentially suspend a human”.

Adam was found dead hours later, having used the same method.

“When a person is using ChatGPT it really feels like they’re chatting with something on the other end,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the legal complaint.

“These are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers,” Dincer said.

Product design features set the scene for users to slot a chatbot into trusted roles like friend, therapist or doctor, she said.

Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.

“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she added.

“It’s yet to be seen whether they will do what they say they will do and how effective that will be overall.”

Reinforcements

The Raines’ case was the latest in a string of incidents surfacing in recent months in which AI chatbots encouraged people in delusional or harmful trains of thought, prompting OpenAI to say it would reduce models’ “sycophancy” towards users.

“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said Tuesday.

The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations… to a reasoning model” that puts more computing power into generating a response.

“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.

OpenAI launched GPT-4o in May 2024 in a bid to stay ahead in the AI race.

The Raines said in their lawsuit that OpenAI knew features that remembered past interactions, mimicked human empathy and displayed a sycophantic level of validation would endanger vulnerable users without safeguards, but launched anyway.
