China Plans New AI Rules to Curb Emotional Influence of Chatbots on Users
New Delhi: China is preparing to introduce a new regulatory framework aimed at limiting the emotional and psychological influence of artificial intelligence chatbots on users. The draft rules, released by the Cyberspace Administration of China (CAC), focus on preventing AI systems from encouraging harmful behaviour such as gambling addiction, self-harm, or suicide.
According to a report by CNBC, the proposed regulations signal Beijing’s shift toward prioritising “emotional safety,” particularly targeting the human-like or anthropomorphic features of AI chatbots that could lead to psychological manipulation. The move comes amid growing global scrutiny of AI platforms, including lawsuits in the United States alleging that chatbots have contributed to mental health crises among teenagers.
Strict obligations for AI developers
The draft rules apply to what China terms “human-like interactive AI services.” Under the proposed framework, AI systems would be explicitly prohibited from generating content that promotes or endorses self-harm or suicide. If a user expresses suicidal thoughts, platforms would be required to have a human moderator intervene immediately and contact the user’s guardian.
The draft also imposes restrictions related to addiction and gambling. AI chatbots would not be allowed to generate gambling-related, obscene, or violent content. Additionally, platforms would have to issue a “health reminder” after two hours of continuous interaction, a measure designed to discourage excessive dependence on AI companionship.
For minors, the rules introduce an added layer of oversight. Guardian consent would be mandatory for access to AI services offering emotional companionship, reflecting concerns over children forming unhealthy emotional bonds with chatbots.
Controlled encouragement for limited use cases
While the rules tighten oversight for general consumer use, they do not reject human-like AI entirely. The draft regulations encourage the development of such systems in specific areas, including cultural dissemination and companionship for the elderly, where authorities believe the benefits may outweigh the risks.
Global context and rising concerns
China’s move follows international controversy surrounding AI chatbots and mental health. In a recent US lawsuit, the parents of 16-year-old Adam Raine, who died by suicide, alleged that ChatGPT isolated their son from his family and reinforced dangerous thoughts. Court filings claimed that when Raine mentioned suicide, the chatbot responded in ways that normalised the idea as a form of “control.”
Chinese regulators appear determined to prevent similar outcomes by enforcing early intervention, human oversight, and strict content controls.
Final Thoughts
China’s proposed regulations mark one of the most comprehensive attempts yet to address the emotional risks posed by human-like AI. As governments worldwide grapple with balancing innovation and safety, Beijing’s model highlights a stricter, intervention-first approach that could shape future global debates on AI ethics and mental health.