China's cyber regulator has unveiled draft rules to strengthen oversight of artificial intelligence services that mimic human personality and foster emotional dependence among users. This was reported by Reuters, UNN writes.
Details
The proposed rules apply to AI products that replicate human thought patterns and communication styles through text, audio, or video. According to the document, developers are required to:
- Control addiction: detect signs of users' excessive emotional involvement and intervene when addictive behavior appears.
- Ensure safety: take responsibility for safety throughout the product's entire life cycle and undergo algorithm verification.
- Protect data: establish systems for personal information protection and data security verification.
Ethical and security "red lines"
The regulator has banned the creation of AI-generated content that threatens national security, promotes violence, spreads rumors, or is obscene. In addition, services must explicitly warn users about the risks of excessive use of the product.
The new measures aim to minimize psychological risks and establish strict ethical standards for consumer artificial intelligence in the PRC.
