AI in Healthcare Needs Expert Co-Creation and Validation, Says Niti Aayog

Niti Aayog member Dr VK Paul emphasized that healthcare AI innovators must co-create solutions with medical experts and ensure rigorous validation for system adoption. He stressed aligning innovation with legal frameworks to build a culture of responsible adoption. Panelists, including a Dutch diplomat and a leading radiologist, highlighted the need for human-centered deployment and ethical use under professional supervision. They noted that AI is already embedded in modern diagnostic equipment, improving scan quality and safety.

Key Points: Niti Aayog on AI in Healthcare: Co-Creation and Validation

  • Innovators must co-create AI with health experts
  • Validation and regulatory compliance are essential
  • AI must be used ethically under professional supervision
  • Goal is responsible adoption, not innovation in isolation
  • Modern medical equipment already incorporates beneficial AI

Innovators must co-create AI with healthcare experts, ensure validation: Niti Aayog member Dr VK Paul

Niti Aayog's Dr VK Paul urges innovators to co-create AI tools with health experts, stressing validation and ethical use for system adoption.

"Please co-create with health technical partner very often. - Dr VK Paul"

New Delhi, February 17

Niti Aayog member Dr VK Paul on Monday urged innovators in the healthcare sector to co-create artificial intelligence solutions with medical and public health experts, stressing that validation and regulatory compliance are essential for adoption within the health system.

Addressing a session at the AI Impact Summit in the national capital, Paul said that innovators must frequently collaborate with health technical partners, including clinicians, biomedical scientists, microbiologists, clinical researchers, public health experts, pathologists and radiologists.

"I want to give a message here. My first request to those who wish to innovate, please co-create with health technical partner very often. And I review this, whoever sends me, or I come across a product, AI product, I say, to make a presentation. There is something known as sensitivity, specificity, at least today, we are using those matrices. Tomorrow, we may not. It may be a clinician, it could be a biomedical scientist, it may be a microbiologist. It could be a clinical researcher, clinical trial person, or a public health person, and so on and so forth. Please, a pathologist, radiologist, done the right thing," Dr VK Paul said.

He emphasised that healthcare innovation should not happen in isolation and must align with existing legal and regulatory frameworks. According to him, the goal is to build a culture of responsible adoption rather than innovation without oversight. Paul further stated that any validated tools presented to the authorities would be absorbed into the system after undergoing due assessment.

"We wish to work in the direction, use it by the law, we want a culture of adoption. I already stated, whatever is validated, and give me those five tools that are validated, we'll absorb into the system. We'll take them through the health technology assessment, make it available as a public good," he said.

Meanwhile, the Counsellor for Health, Welfare and Sport at the Embassy of the Netherlands, Nico Schiettekatte, explained that innovation should not only focus on developing technologies in a human-centred way but also on how they are deployed and used to solve real problems. According to him, it is crucial to evaluate whether technological interventions actually improve outcomes for the people they are intended to serve.

"We're just researching for humans, but this time, we're looking at the impacts, and that's what we need to do. So if we apply these technologies for schools, are the students learning? If we apply it for health, are the patients recovering better? Therefore, I feel this is the real question we are now to answer. It's not about only how do we develop these technologies in a more human based approach, but also how do we deploy it and how we use it to solve our problems," he said.

Dr Harsh Mahajan, Radiologist and Founder and Managing Director of Mahajan Imaging, who was also part of the panel discussion, stressed that artificial intelligence (AI) in healthcare can be highly beneficial, but only when used ethically and under the supervision of trained healthcare professionals. He cautioned against laypeople entering personal health data into platforms like ChatGPT for self-diagnosis, describing it as potentially risky.

"I feel, and I make this statement very responsibly, that if used ethically, if used properly, AI in healthcare can only be beneficial, especially if used under supervision of healthcare professionals and not by lay public at large, where they feed their data into ChatGPT or whatever, and try to figure out what's happening," Mahajan said.

He further noted that modern medical equipment, including CT, MRI, and ultrasound machines, already incorporates AI. This technology helps reduce radiation exposure in CT scans, produce faster and higher-quality scans in MRI and ultrasound, and automatically recognise lesions and other abnormalities.

"So that's very important, then, as has already been said, that you know already, our CT MRI ultrasound equipments have AI inside, those reduction of radiation on CT scans, faster, higher quality scans on MRI and ultrasound, and actually automatic recognition of lesions," said Mahajan.

The India AI Impact Summit is a five-day programme anchored in three foundational pillars, or "Sutras": People, Planet, and Progress.

- ANI


Reader Comments

Rohit P
Finally someone said it! My cousin is a radiologist and he keeps complaining about fancy AI tools that look great in demos but fail in real clinical settings. Validation with Indian patient data is non-negotiable. Jai Hind to Dr. Paul for this clear message.
Aman W
While I agree with the co-creation part, I hope this doesn't become another bureaucratic hurdle. The 'regulatory frameworks' need to be agile and supportive, not just another layer of red tape that kills innovation. We need speed to solve India's healthcare challenges.
Sarah B
Dr. Mahajan's warning about ChatGPT for self-diagnosis is spot on. I've seen so many friends in Delhi doing this. It's dangerous. AI should assist doctors, not replace common sense. The focus on 'public good' and making validated tools available widely is the right approach.
Karthik V
The Dutch counsellor's point is key: "are the patients recovering better?" That's the ultimate test. We have brilliant minds in IITs and AIIMS. If they collaborate, India can lead in affordable, effective AI health tools for the world. Bharat can be a global hub for this.
Meera T
As someone who lost a family member to a delayed diagnosis in a tier-2 city, I truly hope this happens. AI that helps detect diseases early in our district hospitals could save countless lives. But it must be built *with* the doctors who work there, not just *for* them.
