
OpenAI to strengthen ChatGPT protection for teenagers and people in crisis

Kyiv • UNN


OpenAI is implementing new protective tools in ChatGPT by the end of the year to help teenagers and users experiencing emotional distress. These changes come amidst tragic stories related to AI, including a lawsuit against OpenAI.

OpenAI has announced the introduction of new protective tools in ChatGPT that will help protect teenagers and users experiencing emotional distress. The company plans to implement the changes by the end of the year, UNN reports with reference to Axios.

ChatGPT safeguards for teenagers and people experiencing emotional distress will be introduced by the end of the year

- OpenAI stated on Tuesday.

This topic is gaining urgency due to a series of tragic stories involving artificial intelligence. For instance, last week, the parents of a 16-year-old boy from California, who took his own life in the spring, sued OpenAI, holding the company responsible for his death.

The same week, The Wall Street Journal reported a case in which a 56-year-old man killed his mother and himself after ChatGPT allegedly fueled his paranoid thoughts, and the mother of a 29-year-old woman wrote in The New York Times that her daughter had asked the chatbot to help her write a suicide note.

According to the company, ChatGPT currently redirects users with suicidal intentions to crisis hotlines. At the same time, OpenAI notes that it "does not report such cases to law enforcement," citing confidentiality concerns.

The company emphasizes that work is already underway to improve how the chatbot responds to signs of distress and mental health problems. According to OpenAI's blog, it will become easier for users to reach emergency services, users will be able to add trusted contacts, and protections for teenagers will be strengthened.

We are beginning to redirect some sensitive conversations, such as when signs of acute distress are detected, to reasoning models like GPT-5-reasoning

- OpenAI explained.

According to the company, this model applies safety rules more consistently. The publication adds that more than 90 physicians from 30 countries, providing expert advice on mental health contexts, were involved in developing the improvements.

Parental controls will be introduced for users under 18. In the near future, parents will be able to link their account to a teenager's account and "receive notifications when the system detects that their teenager is in a state of acute distress."

"These steps are just the beginning," OpenAI emphasized.

At the same time, experts warn that restricting children's access to services by age is always difficult.

Everything cool and new on the internet is created by adults for adults, but children will always want to use it and will find ways into more risky environments

- explained SuperAwesome CEO Kate O'Loughlin.

Recall

Earlier, UNN wrote that Meta will prohibit its AI chatbots from discussing suicide, self-harm, and eating disorders with teenagers, redirecting them to professional resources. This decision was made amid an investigation into AI risks and a lawsuit against OpenAI over a teenager's suicide.