
OpenAI plans to track harmful content, data will be transferred to the police

Kyiv • UNN


OpenAI scans user conversations in ChatGPT and plans to transfer data about particularly dangerous content to the police. This applies to the promotion of suicide, weapon development, causing harm, and other illegal activities.


OpenAI scans and reviews user conversations in ChatGPT, which it says is done without violating privacy. Data concerning particularly dangerous content is planned to be handed over to the police.

UNN reports with reference to Futurism.

Details

The company confirmed that law enforcement agencies and relevant specialists will be contacted if a review uncovers evidence of:

  • promotion of suicide or self-harm;
  • development or use of weapons;
  • causing harm to others or destroying property;
  • participation in unauthorized activities;
  • violation of the security of any service or system.

OpenAI also acknowledged that it currently does not refer self-harm cases to law enforcement out of respect for people's privacy, citing "the uniquely private nature of ChatGPT interactions."

Conversations with artificial intelligence will not automatically trigger police checks. It is worth noting that such wellness checks often cause even more harm to a person in a crisis situation, because most police officers lack sufficient training to deal with mental health problems. At the same time, OpenAI invokes confidentiality while admitting that it monitors user chats and, moreover, may potentially share them with the police.

The new plans and announced rules thus appear to contradict the company's position on privacy amid its ongoing lawsuit with The New York Times and other publishers, which are seeking access to a vast trove of ChatGPT logs.

The plaintiffs want to determine whether any of their copyrighted material was used to train OpenAI's models.

OpenAI has so far strongly rejected the publishers' request.

Recall

16-year-old Adam Raine, who used ChatGPT for his studies, died by suicide in April, and the chatbot had provided him with "instructions" for it. His parents believe that artificial intelligence played a critical role in the tragedy and have filed a lawsuit against OpenAI.

Elon Musk's startup xAI filed a lawsuit in a US court against Apple and OpenAI, accusing them of an illegal conspiracy to hinder competition in the field of artificial intelligence.