Experts Applaud India's Revised AI Deepfake Rules Focusing on Misleading Content

Legal experts have welcomed the Indian government's amended guidelines on AI-generated deepfakes, noting a narrowed focus on misleading content rather than all synthetic material. The revised IT rules require social media platforms to clearly label AI-generated content and impose a strict three-hour deadline for its removal once flagged. The amendments also bar platforms from allowing applied AI labels to be removed and require automated tools to detect harmful synthetic content. The changes are seen as more reasonable for intermediaries than the earlier proposed mandates.

Key Points

  • Focus on misleading AI content, not all synthetic material
  • 3-hour takedown deadline for flagged deepfakes
  • Mandatory clear labelling or embedded metadata
  • Bars removal of AI labels once applied

Legal experts hail India's revised AI deepfake rules, which require clear labelling and a faster 3-hour takedown for misleading synthetic content.

"I think intermediaries will be happy with the reasonable efforts expectation rather than the earlier proposed visible labelling. - Sajai Singh"

New Delhi, Feb 11

Legal experts have welcomed the government's amended guidelines on AI-generated deepfakes, saying that social media intermediaries will be happy with the reasonable efforts expectation rather than the earlier proposed visible labelling.

The IT Ministry has issued updated guidelines for social media intermediaries like Facebook, Instagram and YouTube, directing them to clearly label all AI-generated content and ensure that such synthetic material carries embedded identifiers.

The MeitY amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, allow regulators and the government to monitor and control synthetically generated information (SGI), including deepfakes. AI-generated or altered content is to be labelled or identified, either through visible disclosures or embedded metadata, so that users can view and consume content in an informed manner.
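
To illustrate what an "embedded metadata" label can look like in practice, the sketch below attaches a machine-readable provenance tag to a PNG file's metadata using the Pillow library. The key names, values and filenames are illustrative assumptions for this example only; the rules do not prescribe this specific format, and production systems would typically use a provenance standard such as C2PA rather than ad hoc keys.

```python
# Minimal sketch (assumed keys/filenames): embedding a synthetic-content
# label in PNG metadata with Pillow. Not the identifiers prescribed by
# the amended IT Rules; purely illustrative.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png")

meta = PngInfo()
meta.add_text("SyntheticContent", "true")           # hypothetical label key
meta.add_text("GeneratorTool", "example-model-v1")  # hypothetical provenance field

img.save("generated_labelled.png", pnginfo=meta)

# A platform-side check could then read the label back:
labelled = Image.open("generated_labelled.png")
print(labelled.text.get("SyntheticContent"))  # -> "true"
```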

"Interestingly, the amendments narrow the scope of what is to be flagged, compared to the earlier draft released by MeitY, with a focus on misleading content rather than everything that has been artificially or algorithmically created, generated, modified or altered," said Sajai Singh, Partner, JSA Advocates & Solicitors.

On the other hand, the government has tightened the deadline for social media platforms to take down AI-generated deepfake content to three hours, from the earlier 36 hours, once it is flagged by the government or ordered by a court.

"I think intermediaries will be happy with the reasonable efforts expectation rather than the earlier proposed visible labelling," said Singh.

The revised norms also bar digital platforms from allowing the removal or suppression of AI labels or associated metadata once they have been applied. Social media companies will also be required to deploy automated tools to detect and prevent the circulation of illegal, sexually exploitative or deceptive AI-generated content, according to the latest MeitY order.

- IANS


Reader Comments

Rohit P
Finally! We needed this. Deepfakes during elections are a huge threat. The embedded metadata is a smart idea - harder for bad actors to remove. Hope platforms comply properly and don't find loopholes.
Aman W
While the intent is good, I'm a bit skeptical about the "reasonable efforts" clause. It sounds vague and might let big tech companies off the hook. Who defines what is reasonable? The enforcement will be key.
Sarah B
As someone working in tech, I appreciate the focus on misleading content. Not all AI-generated content is bad - think of creative filters or educational tools. Labelling everything would have been overkill and stifled positive uses.
Karthik V
Good step forward. My only concern is about the average user. Will my mother know how to check for embedded metadata? Public awareness campaigns are needed alongside these technical rules.
Nikhil C
The three-hour deadline is impressive but is it realistic for smaller platforms with limited resources? Hope the government provides some support or clear benchmarks so compliance is fair across the board.

