Under scrutiny from activists and parents, OpenAI has formed a new team to study ways to prevent its AI tools from being misused or abused by children. This was reported by TechCrunch, according to UNN.
Details
In a new job listing, OpenAI reveals the existence of a child safety team, which the company says works with its platform policy, legal, and investigations teams, as well as external partners, to manage "processes, incidents, and audits" related to underage users.
The team is hiring a child safety specialist who will be responsible for applying OpenAI's policies to AI-generated content and will work on review processes related to "sensitive" content.
Developers devote considerable resources to complying with laws such as the U.S. Children's Online Privacy Protection Act, which controls what children can and cannot access online and what data companies can collect about them. The fact that OpenAI is hiring child safety experts is therefore not a surprise, especially if the company expects a significant base of underage users. (OpenAI's current terms of use require parental consent for children ages 13 to 18 and prohibit use by children under 13.)
Addendum
Children and teenagers are increasingly turning to generative AI tools for help not only with schoolwork but also with personal issues. According to a public opinion poll by the Center for Democracy and Technology, 29% of children report having used ChatGPT to deal with stress or mental health issues, 22% for problems with friends, and 16% for family conflicts.