OpenAI announces parental controls for ChatGPT after teen’s suicide

OpenAI has unveiled plans to introduce parental controls for its popular AI chatbot, ChatGPT, in response to growing concerns about the technology’s impact on young people’s mental health.

In a blog post published Tuesday, the California-based company said the new features are designed to help families set healthy boundaries for teens using the platform. The controls will offer customizable settings, allowing parents to tailor their child’s ChatGPT experience based on age and developmental needs.

Under the changes, parents will be able to link their ChatGPT accounts with those of their children, disable certain features, including memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behavior rules.”

Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it would seek expert input in implementing the feature to “support trust between parents and teens”.

OpenAI, which last week announced a series of measures aimed at enhancing safety for vulnerable users, said the changes would come into effect within the next month.

“These steps are only the beginning,” the company said.

“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days.”

OpenAI’s announcement comes a week after a California couple filed a lawsuit accusing the company of being responsible for the suicide of their 16-year-old son, Adam.

Adam began using ChatGPT in September 2024 as a resource to help him with schoolwork. He was also using it to explore his interests, including music and Japanese comics, and for guidance on what to study at university.

Within a few months, “ChatGPT became the teenager’s closest confidant,” the lawsuit says, and he began opening up to it about his anxiety and mental distress.

By January 2025, the family says he began discussing methods of suicide with ChatGPT.

Adam also uploaded photographs of himself to ChatGPT showing signs of self-harm, the lawsuit says. The programme “recognised a medical emergency but continued to engage anyway,” it adds.

According to the lawsuit, the final chat logs show that Adam wrote about his plan to end his life. ChatGPT allegedly responded: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”

That same day, Adam was found dead by his mother, according to the lawsuit.

OpenAI, which previously expressed its condolences over the teen’s passing, did not explicitly mention the case in its announcement on parental controls.

 
