India's AI Breakthrough: Why Nasscom Praises Flexible Governance Over Control

India's final AI Governance Guidelines have received strong praise from Nasscom for their balanced approach. The framework emphasizes coordination over control through multiple governance bodies working together. It focuses on evidence-based risk management with voluntary measures and graded liability systems. The guidelines explicitly state that no separate AI law is needed at this stage, reflecting industry recommendations.

Key Points: Nasscom Applauds India's Flexible AI Governance Guidelines Framework

  • Guidelines establish AI Governance Group for coordination without over-centralization
  • Sectoral regulators maintain enforcement lead to balance flexibility and accountability
  • Voluntary measures and graded liability form evidence-based risk approach
  • Seven ethical principles guide transparent AI deployment across industries

India's AI guidelines favour coordination over control: Nasscom

India's new AI guidelines prioritize coordination over control with evidence-based risk management and sectoral oversight, earning Nasscom's strong endorsement.

"The legal reform track mirrors our recommendation to rely on existing statutes, identify real gaps, and undertake targeted amendments before considering any new horizontal law - Nasscom"

New Delhi, Nov 7

India’s final AI Governance Guidelines opt for coordination over control, setting out an agile, principle-based framework that supports innovation while managing risk through practical, evidence-led tools, Nasscom, the premier trade association for the IT industry, said on Friday.

The proposed architecture, comprising the AI Governance Group (AIGG), Technology and Policy Expert Committee (TPEC), and the AI Safety Institute (AISI), enables effective coordination and a whole-of-government approach without creating an over-centralised regulator.

According to Nasscom, the Guidelines’ emphasis that sectoral regulators remain in the lead on enforcement and oversight reflects a deliberate effort to preserve the balance between flexibility and accountability.

"On risk mitigation, the Guidelines have absorbed the call for proportionality and evidence-based governance-voluntary measures, graded liability, and a non-punitive AI incidents system forms the backbone of this approach," the apex IT trade body said.

The direction is pragmatic: learn from actual incidents, iterate governance tools, and avoid regulating hypothetical harms, it added.

"The legal reform track mirrors our recommendation to rely on existing statutes, identify real gaps, and undertake targeted amendments before considering any new horizontal law," Nasscom noted.

The not-for-profit trade body further said that the guidelines’ explicit statement that “a separate AI law is not needed at this stage” is a near-verbatim reflection of its own position.

India’s final AI Governance Guidelines are, in effect, an operationalisation of the balanced, innovation-centred model that the industry, led by Nasscom, has consistently proposed.

They succeed in embedding flexibility, shared responsibility, and evidence-based risk management at the policy level, Nasscom noted.

Earlier, the government unveiled the India AI Governance Guidelines under the IndiaAI Mission, providing a framework to ensure safe, inclusive, and responsible adoption of the frontier technology across sectors.

The launch marks a key milestone ahead of the India–AI Impact Summit 2026, as India strengthens its leadership in responsible AI governance, said the Ministry of Electronics and Information Technology.

The guidelines outlined seven ethical principles, recommendations across six governance pillars, an action plan with short, medium, and long-term timelines, and practical guidance for industry, developers, and regulators to ensure transparent and accountable AI deployment.

- IANS


Reader Comments

Rohit P
Finally some sensible thinking! The voluntary measures and graded liability approach makes sense for India's diverse startup ecosystem. Too much regulation would have killed innovation at birth. Good job by Nasscom and MeitY!
David E
As someone working in AI research in Bangalore, I appreciate the evidence-based approach. Learning from actual incidents rather than hypothetical harms is exactly what we need. Hope this helps India become a global AI leader! 🚀
Ananya R
While I appreciate the balanced approach, I'm concerned about enforcement. "Voluntary measures" sound good on paper, but will companies actually comply without proper oversight? Hope the sectoral regulators are equipped to handle this responsibility.
Sarah B
The seven ethical principles and practical guidance for developers is exactly what our AI startup needed. Clear guidelines without excessive red tape will help us innovate responsibly. Looking forward to seeing how this unfolds!
Vikram M
Great to see India taking a pragmatic approach to AI governance! The coordination between AIGG, TPEC, and AISI should ensure comprehensive oversight without bureaucracy. This could become a model for other developing countries. 👍

