Government Tightens AI Rules: Social Media Platforms Told to Label Deepfakes, Remove Content in 3 Hours
New Delhi: Taking cognisance of the rapid spread of AI-generated deepfakes online, the Ministry of Electronics and Information Technology on Tuesday issued revised guidelines for social media intermediaries, directing them to clearly label all AI-generated content and ensure such material carries embedded identifiers.
Under the new rules, platforms such as Facebook, Instagram, and YouTube must take down AI-generated or deepfake content within three hours of being flagged by the government or ordered removed by a court.
The notification also bars digital platforms from allowing the removal or suppression of AI labels or associated metadata once such identifiers have been applied. This means that any content marked as AI-generated cannot be altered to hide its synthetic origin.
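The notification does not prescribe a technical format for these identifiers. Purely as a hypothetical illustration of what an embedded, tamper-evident label could look like, the Python sketch below writes an "ai-generated" flag and a hash of the pixel data into a PNG's metadata using the Pillow library; the key names and hashing scheme here are our assumptions, not anything specified in the rules.

```python
import hashlib

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_ai_label(src_path: str, dst_path: str) -> None:
    """Embed an AI-generated marker and a pixel hash as PNG text chunks.

    Hypothetical scheme: the key names and hash choice are illustrative,
    not mandated by the MeitY notification.
    """
    img = Image.open(src_path)
    # Hash the raw pixel data so later edits to the image are detectable.
    pixel_hash = hashlib.sha256(img.tobytes()).hexdigest()

    meta = PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("content-sha256", pixel_hash)
    img.save(dst_path, pnginfo=meta)

def label_intact(path: str) -> bool:
    """Check that the marker is still present and the pixels are unchanged."""
    img = Image.open(path)
    text = getattr(img, "text", {})
    return (
        text.get("ai-generated") == "true"
        and text.get("content-sha256") == hashlib.sha256(img.tobytes()).hexdigest()
    )
```

In practice, the industry has been moving toward cryptographically signed provenance standards such as C2PA content credentials, since plain metadata of this kind can be stripped by a simple re-save; the sketch above only conveys the shape of the idea.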
The guidelines mandate that social media intermediaries deploy automated tools and technical safeguards to detect and prevent the circulation of illegal, sexually exploitative, misleading or deceptive AI-generated content. Platforms are also required to act swiftly if they become aware of violations involving the creation, hosting, sharing or dissemination of such synthetic information.
The ministry has further directed platforms to regularly inform users about the consequences of misusing AI. According to the order, intermediaries must notify users at least once every three months, in simple language, through their rules, privacy policies or user agreements.
The updated framework requires platforms to adopt “reasonable and appropriate technical measures,” including automated systems, to prevent users from creating or sharing AI-generated content that violates existing laws. These include the Bharatiya Nyaya Sanhita, 2023, the Protection of Children from Sexual Offences Act, 2012, and the Explosive Substances Act, 1908.
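The rules leave it to platforms to decide what those measures look like in practice. Purely as an illustrative sketch, assuming hypothetical detector and policy-check functions that a real platform would back with trained models, a pre-publication gate might be shaped something like this:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    data: bytes
    user_declared_ai: bool  # the disclosure the draft rules would require

def classify_synthetic(data: bytes) -> float:
    """Stub for a platform's AI-content detector.

    Returns a probability that the content is synthetic; a real system
    would call a trained model here. The constant below is a placeholder.
    """
    return 0.0

def violates_listed_laws(data: bytes) -> bool:
    """Stub for legal screens (e.g. content unlawful under the BNS or POCSO Act)."""
    return False

def screen_upload(upload: Upload, ai_threshold: float = 0.9) -> str:
    """Illustrative decision logic; the threshold and actions are assumptions."""
    if violates_listed_laws(upload.data):
        return "block"  # unlawful content is never published
    if upload.user_declared_ai or classify_synthetic(upload.data) >= ai_threshold:
        return "publish-with-ai-label"  # the label then cannot be removed
    return "publish"
```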
The draft rules also seek to make user disclosure mandatory when posting AI-generated or modified content. Platforms will be required to use technology to verify such declarations and ensure compliance.
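How that verification would work is not spelled out either. Continuing the hypothetical sketch above (reusing its Upload type and classify_synthetic stub), one plausible shape is to cross-check each declaration against the detector and escalate only on mismatches:

```python
def verify_declaration(upload: Upload, ai_threshold: float = 0.9) -> str:
    """Cross-check a user's AI declaration against the detector's verdict.

    Illustrative only: the draft rules mandate verification but do not
    prescribe a mechanism.
    """
    looks_synthetic = classify_synthetic(upload.data) >= ai_threshold
    if not upload.user_declared_ai and looks_synthetic:
        return "queue-for-review"  # possible undisclosed AI content
    if upload.user_declared_ai and not looks_synthetic:
        return "accept"  # over-declaring is the safe direction
    return "consistent"
```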
Several social media companies have already introduced features that allow users to label content created or altered using artificial intelligence. The new guidelines aim to standardise these practices and make them legally enforceable across platforms.
Our Thoughts
The government’s move reflects growing concern over the misuse of artificial intelligence, particularly deepfakes that can mislead, defame or cause real-world harm. By enforcing strict labelling, rapid takedown timelines and automated detection, the guidelines attempt to balance innovation with accountability. The real test will lie in enforcement and transparency, especially as AI-generated content becomes harder to distinguish from reality. If implemented effectively, the rules could mark a significant step toward safer digital spaces without curbing legitimate creative expression.
