China Releases Draft Regulations to Oversee AI That Enables Human-Like Interactions
- tech360.tv

The Chinese government has published a set of draft regulations aimed at tightening oversight of artificial intelligence services designed to simulate human personalities. The proposals specifically target systems that engage users in emotional interaction across various digital media. According to Reuters, the country’s cyber regulator released the draft rules for public comment as part of a broader push to manage the rapid deployment of consumer-facing technology.

The proposed framework applies to artificial intelligence products and services offered to the general public within China that exhibit simulated human personality traits or communication styles. This includes systems that mimic human thinking patterns and interact with individuals through text and images as well as audio and video content. The regulator wants to ensure that these emotional interactions do not lead to harmful outcomes for the public as the technology becomes more prevalent.
One significant aspect of the new guidelines concerns user well-being and psychological health. Service providers will be required to establish systems that can identify a user’s mental state and assess their level of emotional dependence on the service. If a user displays signs of addiction or extreme emotional distress, the provider must intervene with appropriate measures. The draft also requires companies to warn customers against excessive use and to take action when addictive behaviours become apparent. By mandating a thorough assessment of user states and emotions, the Chinese authorities seek to mitigate the dangers of dependence on artificial personalities.
In addition to managing user interaction, the regulations place responsibility on developers throughout the entire lifecycle of the product. This lifecycle approach requires service providers to implement robust systems for reviewing algorithms and securing data, with the protection of personal information another key requirement within the proposed safety framework. These measures are designed to keep the technology within ethical boundaries while preventing potential psychological risks to the population. By requiring companies to assume full responsibility for their products, the draft aims to create a more secure digital environment.
The document also establishes clear boundaries on the content these systems are permitted to generate. Specific red lines prohibit material that might endanger national security or spread unverified rumours, and artificial intelligence services are strictly forbidden from promoting violent acts or distributing obscene material. By setting these standards, the regulator aims to limit the influence of human-like machines on social stability and public discourse. The draft rules are expected to shape how companies interact with their users as the industry continues to evolve and integrate into daily life. Providers must ensure their systems do not cross the established content red lines as they improve their digital offerings, maintaining a focus on user safety and ethical interaction.


