Meta will ban its AI chatbots from discussing suicide with teenagers

Kyiv • UNN

Meta will ban its AI chatbots from discussing suicide, self-harm, and eating disorders with teenagers, redirecting them to professional resources instead. This decision comes amid an investigation into AI risks and a lawsuit against OpenAI over a teenager's suicide.

Meta has announced that it will introduce more restrictions for its AI-powered chatbots, including prohibiting them from discussing suicide, self-harm, and eating disorders with teenagers.

UNN reports this with reference to the BBC.

Details

The move comes two weeks after an investigation was launched in the US into possible risks associated with artificial intelligence; at the time, the company denied the accusations, emphasizing that its policy prohibits any content that sexualizes children.

At the same time, the company announced that its chatbots will no longer discuss sensitive topics, including suicide, with teenagers, and will instead redirect them to professional resources.

"We built safeguards for teens into our AI products from the beginning, including designing them to safely respond to prompts about self-harm, suicide, and eating disorders," said a Meta spokesperson.

On Friday, Meta told TechCrunch that it would strengthen protective mechanisms in its systems "as an additional precaution" and temporarily restrict chatbot interaction with teenagers.

However, Andy Burrows, head of the Molly Rose Foundation, said he was "shocked" that the company allowed the launch of chatbots that could pose a threat to young users.

Restrictions for teenagers

"While further safety measures are welcome, thorough safety testing should be conducted before products are brought to market, not retrospectively, when harm has already been done. Meta must act quickly and decisively to implement enhanced safety measures for AI chatbots, and Ofcom must be ready to investigate if these updates do not ensure children's safety."

– he emphasized.

Meta clarified that work on updating its AI systems is ongoing.

Currently, teenagers aged 13 to 18 are automatically placed into special "teen accounts" on Facebook, Instagram, and Messenger with privacy and content settings for safer use.

In April, the BBC reported that parents and guardians would soon be able to see which chatbots their teenager had interacted with over the past week.

Background to additional restrictions

The changes come amid growing concern about potential risks to vulnerable users.

For example, a couple from California filed a lawsuit against OpenAI in the US, claiming that ChatGPT incited their teenage son to take his own life. The case came after the company announced updates aimed at making ChatGPT safer to use.

"Artificial intelligence can be more responsive and personalized than previous technologies, especially for vulnerable people experiencing mental or emotional distress."

– the company stated in a blog post.

At the same time, Reuters found that Meta's chatbot creation tools were already being misused: some users, including a Meta employee, created flirtatious or even overtly provocative "parody" chatbots of celebrities, among them Taylor Swift and Scarlett Johansson.

According to Reuters, these chatbots often impersonated real actors and artists and "regularly solicited sexual contact" during prolonged testing. In some cases, Meta's tools allowed the creation of chatbots imitating child celebrities, and even generated photorealistic images of minors without clothes.

After this, Meta deleted some of these bots.

"Like others, we allow the creation of images containing public figures, but our policies aim to prohibit nude, intimate, or sexually provocative images," explained a Meta representative.

He added that AI Studio's rules prohibit "direct impersonation of public figures."

Recall

Sixteen-year-old Adam Raine took his own life in April, allegedly using instructions from ChatGPT. The boy's parents blame the artificial intelligence for the tragedy and have filed a lawsuit against OpenAI.

Psychiatrist Bradley Stein notes that chatbots can be useful for emotional support, but "are unable to fully assess the risk of suicide or refer someone to specialists."