Tuesday, April 7, 2026

China moves to regulate digital humans, bans addictive services for children

China’s cyberspace regulator has officially moved to rein in the Wild West of the "digital human" industry. On April 3, 2026, the Cyberspace Administration of China (CAC) released a comprehensive draft of regulations governing the creation and deployment of AI-powered virtual avatars. The move marks a significant shift from general generative AI oversight to a hyper-specific focus on how anthropomorphic entities interact with real users—particularly children.

The centerpiece of the new directive is a stringent ban on "addictive services" targeting minors. Under the draft rules, digital humans are strictly prohibited from forming "virtual intimate relationships" with users under the age of 18. This addresses a growing concern among regulators regarding the psychological impact of AI companions that mimic emotional bonds, which can lead to social withdrawal or excessive emotional dependence in developing minds.

Beyond emotional safeguards, the regulations impose hard technical requirements on service providers. Every digital human must be clearly and prominently labeled as a non-human entity. This "labeling mandate" is designed to prevent deception and ensure that users—who may be interacting with highly realistic 3D models or deep-synthesis voices—are always aware that they are speaking with an algorithm rather than a biological person.

Privacy and identity theft are also high on the CAC’s priority list. The draft rules explicitly ban the use of personal information or biometric data to create digital humans without the subject's documented consent. Furthermore, companies are forbidden from using virtual avatars to bypass identity verification systems—a direct response to the rise of "deepfake" fraud where AI avatars are used to trick facial recognition software.

Content moderation for these digital entities is equally rigorous. Virtual humans are prohibited from disseminating any material that could endanger national security, incite subversion, or undermine national unity. They are also expected to "resist" generating content that is sexually suggestive, depicts cruelty, or incites regional or ethnic discrimination. In a rare "human-centric" clause, providers are actually encouraged to program avatars to intervene and offer professional assistance if a user exhibits signs of self-harm or suicidal ideation.

The economic implications for China’s tech giants—including Tencent, Baidu, and ByteDance—are substantial. These companies have already invested billions into virtual idols, livestreaming hosts, and customer service avatars. By setting clear "red lines" now, the Chinese government aims to steer the "digital human economy" toward a path of "healthy development" that prioritizes social stability and the mental health of the next generation over pure engagement metrics.

As these rules remain open for public comment until May 6, 2026, global tech observers are watching closely. China’s proactive, top-down approach to AI ethics serves as a live experiment in how to balance cutting-edge innovation with the very real risk of human-AI blurring. For now, the message from Beijing is clear: digital humans may look and act like us, but they will be held to a far stricter code of conduct.
