India Mandates AI-Generated Content Labels to Combat Deepfake Threats

The Indian government has amended the IT Rules, 2021, making it mandatory to label AI-generated synthetic content. Intermediaries and platforms must ensure such material carries clear labels and, where feasible, embed permanent metadata for tracing its origin. The rules introduce formal definitions for synthetic content and impose stricter due-diligence obligations, including faster grievance redressal and a three-hour window for certain content takedowns. These amendments, aimed at combating deepfakes and AI-driven misinformation, will come into force on February 20, 2026.

Key Points

  • Mandatory labeling for AI content
  • Platforms must embed traceable metadata
  • Faster 3-hour takedown orders
  • New definitions for synthetic media
  • Rules effective from February 2026


New IT rules require clear labeling of AI-generated synthetic content and faster takedowns to combat deepfakes and misinformation.

"Synthetically generated information... appears to be real, authentic or true - MeitY Notification"

New Delhi, February 10

The Union Government has notified amendments to the Information Technology Rules, 2021, which make it mandatory to label AI-generated content.

Intermediaries offering tools that enable the creation or dissemination of "synthetic content" must ensure such material carries a clear and prominent label. Where technically feasible, platforms are also required to embed permanent metadata or provenance identifiers to trace the origin of such content, the Ministry of Electronics and Information Technology (MeitY) said in a notification.
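The notification does not prescribe a specific metadata format. Purely for illustration, a minimal provenance identifier could pair a prominent label with a cryptographic hash of the content, so a file can later be matched back to the record of its origin. The field names and generator name below are hypothetical, not drawn from the rules:

```python
import hashlib
import json

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a hypothetical provenance record for synthetic content:
    a clear label plus a SHA-256 content hash that allows the file
    to be traced back to this record later."""
    return {
        "label": "synthetically generated information",  # the mandated clear label
        "generator": generator,                          # tool that produced the content
        "sha256": hashlib.sha256(content).hexdigest(),   # identifier tying record to file
    }

record = make_provenance_record(b"example synthetic image bytes", "demo-model-v1")
print(json.dumps(record, indent=2))
```

Real-world schemes such as C2PA content credentials embed signed manifests directly inside the media file rather than a sidecar record like this sketch.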

The amendments also introduce formal definitions for "audio, visual or audio-visual information" and "synthetically generated information," covering content that is artificially created or altered using computer resources in a manner that appears realistic or indistinguishable from real persons or events.

Routine editing, accessibility improvements, and good-faith formatting, however, have been excluded from this definition.

"Synthetically generated information means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event," it said.

The amendments aim to address the growing risks posed by deepfakes and AI-driven misinformation, while balancing innovation with user safety and accountability. Non-compliance may attract penalties under the Information Technology Act, 2000, and other applicable criminal laws, it said.

The rules place enhanced due-diligence obligations on intermediaries, particularly prominent social media platforms. These include deploying automated tools to prevent the generation or circulation of unlawful synthetic content such as child sexual abuse material, misleading impersonations, or false electronic records.

"...includes any such synthetically generated information that contains child sexual exploitative and abuse material, non-consensual intimate imagery content, or is obscene, pornographic, paedophilic, invasive of another person's privacy, including bodily privacy, vulgar, indecent or sexually explicit," the notification said.

Platforms must also require users to declare whether uploaded content is synthetically generated and verify such declarations, it said.

Timelines for compliance have been sharply reduced. Intermediaries must now act within three hours of receiving lawful takedown orders in certain cases, while grievance redressal and response timelines have also been shortened.

The new rules, issued by the Ministry of Electronics and Information Technology, will come into force on February 20, 2026.

- ANI


Reader Comments

Rohit P
Finally! Deepfakes are a serious threat, especially during election season. The 3-hour takedown rule for unlawful content is crucial. Hope the enforcement is strict. Our elders in the family often believe anything they see online.
Aman W
While the intent is good, I'm concerned about the implementation. Will small Indian startups creating AI tools be able to comply with all these technical requirements? The cost might be high. The government should provide some support or guidelines.
Sarah B
The exclusion for routine editing and accessibility is a sensible detail. It shows they've thought this through and aren't just making a blanket rule. Protecting privacy, especially for women and children, is paramount. Hope this reduces online harassment.
Vikram M
Giving until 2026 to comply is smart. It gives companies time to build the tech. But the real test will be user awareness. Just putting a label won't help if people don't understand what it means. Need a big public education campaign too.
Karthik V
Respectfully, I have to ask: will this apply equally to all content creators? Sometimes satirical or parody content uses AI for effect. The line between "misleading" and creative expression can be thin. The rules must not stifle legitimate creativity.
