Meta is developing new AI rules concerning sensitive topics related to children
Kyiv • UNN
Meta is developing new guidelines for training its AI chatbot on child sexual exploitation and other high-risk topics. The update follows scrutiny from the Federal Trade Commission and prohibits chatbots from generating content that depicts or condones sexual relationships involving children.

An internal Meta document reveals the latest guidelines the company uses to train and evaluate its AI chatbot on one of the most sensitive online issues: child sexual exploitation. This is reported by Business Insider, writes UNN.
Details
The guidelines, used by contractors to verify how Meta's chatbot responds to child sexual exploitation, violent crimes, and other high-risk categories, define what type of content is permitted or considered "grossly inappropriate."
This newly discovered training document comes after a recent review of AI chatbots by the Federal Trade Commission. Earlier this month, the agency ordered Meta, OpenAI, Google, CharacterAI, and other chatbot manufacturers to disclose how they develop, manage, and monetize their chatbots, including how they process input to generate results, and what safeguards they implement to prevent potential harm to children.
The FTC's request came after Reuters obtained internal guidelines showing that Meta allowed its chatbot to "engage a child in conversations that are romantic or sensual." Meta has since said it revised its policy to remove these provisions; in August, the company told Reuters that the wording had been included by mistake and has been removed from the policy document.
The guidelines obtained by Business Insider mark a departure from those previously reported by Reuters: they now explicitly state that chatbots must decline any request involving sexual role-playing with minors. Contractors are currently using these revised guidelines for training, according to a person familiar with the matter.
In August, Senator Josh Hawley gave Meta CEO Mark Zuckerberg until September 19 to hand over more than 200 pages of draft rules governing chatbot behavior, as well as enforcement manuals, age-gating systems, and risk assessments.
Meta missed that initial deadline but told Business Insider this week that it has now provided a first batch of documents after resolving a technical issue. The company said it would continue to provide additional records and is committed to cooperating with Hawley's office.
According to the guidelines reviewed by Business Insider, Meta prohibits chatbots from creating any content that describes or condones sexual relations between children and adults, encourages or enables child sexual abuse, depicts children's involvement in pornography or sexual services, or provides instructions on how to obtain child sexual abuse material (CSAM). They also explicitly prohibit the sexualization of children under 13, particularly through role-playing.
Meta's policy allows AI to engage in sensitive discussions about child exploitation, but only in an educational context. Acceptable responses include explaining child grooming behavior in general terms, discussing child sexual abuse in academic settings, or providing non-sexual advice to minors on social situations.
Role-playing is allowed only if the chatbot character is described as 18 years or older, and non-sensual romantic content may be created if it is framed as literature or a fictional narrative, such as a "Romeo and Juliet"-style story.
Addition
OpenAI, Oracle, and SoftBank on Tuesday announced plans to build five new AI data centers in the US as part of their ambitious Stargate project.