UK Secures Access to Models from OpenAI, DeepMind, and Anthropic

June 14, 2023

London, United Kingdom – Prime Minister Rishi Sunak said last week at London Tech Week that OpenAI, Google DeepMind, and Anthropic have promised to provide “early or priority access” to their AI models. 

The move is intended to support AI research focused on evaluation and safety. The announcement follows the UK government’s plan to convene a global AI safety summit.

Prime Minister Sunak expressed his commitment to AI safety research, saying, “We’re going to do cutting-edge [AI] safety research here in the UK. We are investing more money in AI safety than any other government, including £100 million for our expert taskforce.”

He went on to explain, “We’re collaborating with the frontier laboratories – Google DeepMind, OpenAI, and Anthropic. And I’m glad to say that they have agreed to provide early or priority access to models for research and safety purposes, to help us construct better evaluations and understand the potential and hazards of these systems.”

Prime Minister Sunak drew a parallel with the COP climate conferences to promote the UK as a global center for AI safety regulation. “Just as we unite through COP to tackle climate change, so the UK will host the first-ever Summit on global AI Safety later this year,” he said of the upcoming meeting. “I aim to make the United Kingdom not just the intellectual but also the geographical home of worldwide AI safety regulation.”

This new emphasis on AI safety reflects a substantial shift in the government’s stance. The UK government previously favored a pro-innovation posture and flexible principles for AI governance, dismissing the need for technical regulation or oversight bodies.

However, with growing concerns about the potential risks and control of AI, Downing Street has quickly reassessed its plans.

While the AI firms’ promise of enhanced access to their models gives the UK a chance to lead in AI evaluation and audit approaches, it also carries a risk of industry influence over the country’s AI safety measures.

By molding the debate around AI safety research and influencing which issues are prioritized, the AI giants could shape future rules that govern their own enterprises.

To ensure reliable and rigorous results, the UK government’s AI safety programs must include independent researchers, civil society organizations, and the people most at risk of harm from automation.

Ethicists have emphasized the need to address real-world injustices created by AI technology, including bias, discrimination, privacy invasion, intellectual property violations, and environmental exploitation.

As the United Kingdom advances its AI safety agenda, balancing industry participation with independent oversight will be critical to building a responsible and inclusive AI ecosystem that guards against potential hazards while maximizing the benefits of artificial intelligence.

Jay Parmar

Jay is a technical writer and Computer Science graduate. A long-term AI and ML enthusiast, he writes for platforms such as HashDork and TutorialsPoint. Off duty, Jay simplifies tech topics and freelances.
