AI Content Rules: Why Creators Warn Against Blanket Labelling Mandates

A recent roundtable revealed strong industry pushback against India's draft IT rules for AI-generated content. Creators are worried that mandatory labels for all AI-enhanced work could unfairly damage their hard-earned credibility. Legal and platform experts argue the rules need a smarter, risk-based approach instead of a one-size-fits-all mandate. The consensus is that regulation should target harmful deepfakes without stifling everyday creative innovation.

Key Points: Stakeholders Flag Concerns Over Draft IT Rules on AI Content

  • Stakeholders warn draft rules risk clubbing routine AI tools with high-risk synthetic media
  • Creators argue excessive labelling could damage personal credibility and trust
  • Legal experts say rules lack a differentiated, risk-graded approach to regulation
  • Platforms note mature global jurisdictions favor principle-based, not rigid, AI rules

Stakeholders flag concerns over blanket labelling in draft IT rules on synthetically generated information

Creators, legal experts, and platforms warn India's draft AI content rules risk harming digital trust and innovation with blanket labelling requirements.

"The absence of risk grading results in overbroad mandates that treat all content with suspicion. - Akshat Agarwal, AASA Chambers"

New Delhi, December 11

A cross-section of creators, legal experts, brand representatives and digital platforms on Monday raised strong objections to what they termed "blanket labelling" requirements in the Draft IT Rules on Synthetically Generated Information (SGI), urging the government to adopt a more transparent, risk-tiered regulatory framework.

According to a press release issued by the organisers, the observations were made at a closed-door roundtable convened by The Dialogue, a New Delhi-based tech policy think tank, to examine the feasibility and legal viability of the Draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025.

Participants warned that the current formulation risks clubbing routine AI-enabled creative processes with high-risk synthetic media. Creators argued that the digital economy is built on personal credibility, and excessive labelling could damage that trust.

"There is a clear difference between AI-authored content and AI-enhanced content. Almost everything in our industry is AI-enhanced now, but my mileage as a creator is still built on trust... If every video I make ends up with an 'AI' banner just because I used captions or a clean-up tool, my credibility is at stake," content creator Tuheena Raj said, stressing that strong labels should apply mainly to "finance, health, political messaging, deepfakes - not... routine, low-risk enhancements."

Representatives from the advertising sector noted that AI is already deeply integrated into scriptwriting, editing, localisation, and testing workflows. They cautioned that unclear provisions might enable "liability dumping", pushing compliance burdens onto smaller creators and agencies.

Platform representatives drew parallels with global regulatory trajectories, noting that even mature jurisdictions lean towards principle-based, risk-graded AI rules rather than rigid, format-specific mandates.

"We work across multiple jurisdictions... Even in those mature' territories, you don't yet see such detailed rules on how every piece of synthetic media must be tagged," said Shivani Singh of Glance (InMobi Group). She questioned whether "blanket labelling will actually solve the deepfake problem we are worried about."

Legal experts argued that the Draft Rules conflate transparency with harm prevention and lack a differentiated approach to risk. "The absence of risk grading results in overbroad mandates that treat all content with suspicion," said Akshat Agarwal of AASA Chambers, adding that labelling could become "a blunt instrument that penalises innovation without meaningfully curbing harm."

Across the discussion, stakeholders emphasised the need for clearer definitions, exemptions for routine or accessibility-related AI uses, and interoperable provenance standards rather than heavy detection obligations. They stressed the importance of frameworks that protect against deception without undermining legitimate creative expression.

- ANI


Reader Comments

Priya S
Finally, some sense! I'm a small-time creator and use AI tools for subtitles, background noise removal, and colour correction. If my videos get a scary "AI-Generated" label, my audience will think I'm fake. My channel's growth is based on trust. The government needs to understand the creator economy before making such broad rules. 🙏
Rohit P
Good points raised. We can't regulate like China. A balanced approach is needed. Label deepfakes and synthetic media used for news/politics, but leave the creative and editing tools alone. Otherwise, India will fall behind in the global digital race. Jai Hind!
Sarah B
Working in tech policy, I see this globally. The EU's AI Act is risk-based. Blanket labelling is inefficient and impossible to enforce. It will hurt small businesses the most. The government should listen to these experts and revise the draft.
Karthik V
I respectfully disagree with the complete dismissal of labels. As a common user, I want to know if what I'm watching is fully AI-made or just enhanced. Transparency is key. But yes, the rule should be smart - maybe a small icon for enhancements and a clear warning for deepfakes. Don't throw the baby out with the bathwater.
Meera T
"Liability dumping" is a real fear for agencies like ours. Big platforms will pass on all compliance costs to us. The government must provide clear definitions and safe harbours for low-risk use. We support regulation, but it must be practical

