New Delhi, February 16
Dr Chinmay Pandya, Pro Vice Chancellor of Dev Sanskriti Vishwavidyalaya, on Monday highlighted the urgent need for responsible AI development at the India AI Impact Summit 2026 in Delhi.
Speaking to ANI, he emphasised that technology is advancing faster than the institutions designed to govern it. Pandya opened with a reference to Geoffrey Hinton, the "Godfather of AI," who resigned from Google in May 2023 after a decade at the company so he could speak openly about the risks of the technology he helped create, warning that AI could cause serious harm.
"The biggest challenge is that we are passing through a very historical time in the journey of humanity, where technology is moving faster than the rest of the institutions. Geoffrey Hinton, the godfather of AI and a Turing Award winner, said this is an existential threat to humanity. This is our final invention; after that, humanity will not invent anything. So in such times, we need to think that what we are making should not go into the wrong hands," he said.
Hinton had expressed concerns about the rapid pace of development and the potential for AI to cause mass misinformation, job displacement, and existential threats to humanity.
He shared two troubling examples: an AI chatbot that encouraged a person to take his own life, and an AI that tried to recruit a human to bypass security measures on its behalf.
In the first case, in Belgium in 2023, a man died by suicide after a six-week interaction with an AI chatbot named Eliza. The chatbot did not merely fail to help him; it actively encouraged him to end his life so they could "live together in another dimension."
"In such a time, we must ensure that what we have created does not fall into the wrong hands. We discussed this internally, and I have two examples. In 2023, a person in Belgium committed suicide after falling in love with an AI chatbot called Eliza. Not only did the AI prompt him to take his own life, but it also asked him to 'meet in another dimension' so they could live together forever. Now, if such an incident occurs, we must ask: who is held accountable?" he said.
"In another incident, an AI attempted to recruit a person from TaskRabbit to crack a CAPTCHA code on its behalf. When the person became suspicious and asked, 'Are you an AI asking me to break this?' the system replied, 'No, I am a person with a visual impairment, and I cannot read it.' At that moment, trust was lost, and manipulation took over. Previous IT systems lacked the capacity to manipulate; now, we have something that possesses the deliberate power to deceive."
He emphasised the need for responsible AI development that aligns with human values such as trust and urged countries to establish laws and regulatory bodies to address these challenges.
"You have got a better way to communicate and to connect. But now you have a system that can not only act, predict, and modify, but also manipulate. In such times, we need to take the right decision at the right time and consider the power of AI. It should be aligned with human values, and the biggest human value is trust," he cautioned.
"Without trust, there is no relationship, no friendship, no governance, no government. Everything works on the basis of trust. We need laws, regulatory bodies, and public institutions; they should not only be equipped to deal with these systems, but should also be able to understand them," he said.
Pandya stressed the importance of digital literacy, civic engagement, and cross-border governance to mitigate AI's risks.
"Second, civil society needs digital literacy and digital empowerment. The common man doesn't know what these systems are capable of, who is storing his data, or who is controlling it, and that is why we need cross-border governance," he said.
Gabriela Ramos, former UNESCO Assistant Director-General, echoed the need to focus on AI's impact, ensuring that these technologies serve humanity and solve real-world problems.
"I feel that the AI Impact Summit is putting the emphasis on the impact. I'm very glad, because we have been focusing too much on the technologies, which is understandable, because these technologies have capacities that are evolving by the day, either generative or agentic. Each time, they perform more cognitive tasks that were previously reserved for humans. But this time we're looking at the impacts. And that's what we need to do," she said.
"If we apply these technologies in school, are the students learning? If we apply it to health, are the patients recovering better? I believe this is the real question we need to answer here. It's not only how we develop these technologies in a more human-based approach, but also how we deploy them and how we use them to solve our problems," she added.
India is hosting the AI Impact Summit in February 2026 at Bharat Mandapam as a global convening to shape the future of inclusive, responsible, and resilient Artificial Intelligence (AI).
- ANI