India’s AI Regulation Lags on Consumer Safety as China Pushes Emotional AI Rules
New Delhi: India has so far regulated artificial intelligence (AI) largely through existing legal frameworks, relying on due diligence obligations under the Information Technology Act and Rules, sector-specific financial regulations, and privacy and data protection norms. However, it does not yet have a dedicated consumer safety regime that clearly defines the State’s duty of care in relation to AI-driven psychological and emotional harms.
In contrast, China last week unveiled draft rules targeting emotionally interactive AI services. These proposals would require companies to warn users against excessive use and intervene when systems detect signs of extreme emotional states. While such measures aim to address psychological dependence not covered by general content laws, critics warn that they may encourage intrusive monitoring by forcing platforms to infer users’ mental states.
India’s approach is less invasive but also more fragmented. The Ministry of Electronics and Information Technology (MeitY) has used the IT Rules to curb deepfakes and online fraud and to mandate the labelling of synthetically generated content, largely in response to emerging harms. Financial regulators have adopted structural safeguards, with the Reserve Bank of India setting expectations on AI model risk in lending and the Securities and Exchange Board of India insisting on accountability for AI use by regulated entities.
Experts note that India remains behind the U.S. and China in developing frontier AI models, despite having a large adoption ecosystem. In this context, a “regulate first, build later” strategy risks stalling domestic capacity. A more balanced path would involve boosting computing access, workforce skills, public procurement and research-to-industry pipelines, while regulating downstream, high-risk AI uses more firmly.
Our Final Thoughts
India faces the challenge of protecting users without overreaching or slowing innovation. Clearer duties of care for high-risk AI applications could strengthen safety while allowing domestic capabilities to grow.
