OpenAI, Microsoft Join UK-Led Global AI Safety Coalition with New Funding

OpenAI and Microsoft have joined a UK-led international coalition and committed new funding to the UK AI Security Institute's Alignment Project. The initiative aims to ensure advanced AI systems remain safe, secure, and under human control as the technology integrates into public services. UK officials emphasized that public trust is crucial for unlocking AI's full benefits and that alignment research tackles this barrier directly. The project has already awarded grants to numerous international research efforts and is supported by a coalition of global institutes and companies.

Key Points: OpenAI & Microsoft Join UK AI Safety Coalition

  • New funding for AI alignment research
  • Focus on safety and human control
  • Aims to build public trust in AI
  • Coalition includes global partners


"AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset. - David Lammy"

New Delhi, February 20

OpenAI and Microsoft have joined the United Kingdom's international coalition to safeguard artificial intelligence development. The technology companies have committed new funding to the UK AI Security Institute's flagship Alignment Project to ensure advanced AI systems remain safe, secure, and under human control.

The announcement was made by UK Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan at the AI Impact Summit in New Delhi. This development brings the total funding available for AI alignment research to more than GBP 27 million. OpenAI has provided an additional GBP 5.6 million to the fund, which also receives backing from Microsoft and other international partners.

The Alignment Project focuses on AI alignment, a field of research dedicated to steering advanced AI systems to reliably act as intended, without unintended or harmful behaviour. The initiative aims to build public trust as AI technology is integrated into public services and national infrastructure. Progress in this field supports the adoption of systems that increase productivity, reduce medical scan times, and create new jobs.

UK Deputy PM Lammy said, "AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset. We've built strong safety foundations which have put us in a position where we can start to realise the benefits of this technology. The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort."

The project has already awarded grants to 60 research efforts across eight countries, with a second round of funding scheduled to open this summer. Officials stated that without continued progress in alignment research, increasingly powerful AI models could act in ways that are difficult to anticipate or control, potentially posing challenges for global safety and governance.

UK AI Minister Kanishka Narayan said, "We can only unlock the full power of AI if people trust it - that's the mission driving all of us. Trust is one of the biggest barriers to AI adoption, and alignment research tackles this head-on. With fresh backing from OpenAI and Microsoft, we're supporting work that's crucial to ensuring AI delivers its huge benefits safely, confidently and for everyone."

The international coalition supporting the initiative includes the Canadian Institute for Advanced Research, the Australian AI Safety Institute, Schmidt Sciences, Amazon Web Services, and Anthropic. The project is led by an expert advisory board that includes researchers such as Yoshua Bengio, Zico Kolter, Shafi Goldwasser, and Andrea Lincoln.

Mia Glaese, VP of Research at OpenAI, said, "As AI systems become more capable and more autonomous, alignment has to keep pace. The hardest problems won't be solved by any one organisation working in isolation - we need independent teams testing different assumptions and approaches. Our support for the UK AI Security Institute's Alignment Project complements our internal alignment work and helps strengthen a broader research ecosystem focused on keeping advanced systems reliable and controllable as they're deployed in more open-ended settings."

The Alignment Project combines grant funding for research with access to compute infrastructure and academic mentorship from scientists at the AI Security Institute. The UK government intends to use these resources to drive progress in safe AI that behaves predictably.

- ANI

Reader Comments

Rohit P
Good to see international cooperation on this. AI safety is a global issue, no single country can handle it alone. The mention of reducing medical scan times is exciting - if safe AI can improve healthcare in rural India, that's a huge win.
Karthik V
GBP 27 million sounds like a lot, but is it really enough for a problem of this scale? These companies spend billions on development. The funding seems tokenistic. True commitment would be a much larger share of their R&D budget dedicated to safety.
Anjali F
Trust is indeed the biggest barrier. After seeing so much deepfake misuse during elections, people are rightfully wary. Projects like this that bake in safety from the start are crucial. Hope the research findings are shared openly and not kept proprietary.
David E
Interesting to see the UK taking a lead here with Indian involvement. The collaborative model with grants across 8 countries is smart. Diversity of thought will be key to solving alignment. The advisory board looks stellar.
Siddharth J
As a developer, I'm glad the focus is on alignment. We build tools fast, but often don't fully consider long-term consequences. Having a framework for "reliably acting as intended" will help Indian startups too as we adopt these advanced models. More power to the initiative! 💻