US and UK ministers meet to establish a bilateral agreement on AI safety

April 2, 2024
  • The US Commerce Secretary and UK Tech Minister signed a bilateral agreement on AI safety
  • The Memorandum of Understanding (MOU) commits both countries to sharing AI research and policy information
  • It follows the UK's AI Safety Summit held last November, which established international agreements on AI

The UK and the US established a Memorandum of Understanding (MOU) on AI safety.

US Commerce Secretary Gina Raimondo and UK Tech Minister Michelle Donelan shook hands on the bilateral agreement, with Minister Donelan describing AI as “the defining technology challenge of our generation.” 

The partnership builds on the commitments made during the AI Safety Summit at Bletchley Park in November 2023.

This groundbreaking summit convened leading figures in the AI industry and political representatives from multiple nations, including a rare collaboration between the US and China. 

The summit also led to the creation of “AI Safety Institutes” in the UK and the US dedicated to evaluating open- and closed-source AI systems. 

Secretary Raimondo said of the agreement: “It will accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.” 

Raimondo continued, “Our partnership makes clear that we aren’t running away from these concerns – we’re running at them.”

Under the new agreement, UK and US researchers will run joint safety evaluations, conduct joint testing exercises such as “red teaming,” and share expertise. 

Donelan was hopeful about creating a safe path forward for AI, stating, “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

AI industry regulation remains a patchwork of voluntary agreements and frameworks.

For instance, last year, the Biden administration expanded its voluntary safety framework to tech companies such as Adobe, IBM, Nvidia, and Salesforce, which joined existing participants like Google, Microsoft, and OpenAI. 

Such voluntary commitments are stacking up, but many doubt their efficacy. Talk is cheap, and the tech industry has not traditionally been successful at self-regulation. 

The UK still largely lacks AI regulation

This bilateral agreement could be critical for the UK, which, sitting outside the scope of the European Union’s AI Act, lacks almost any form of AI regulation.

The AI Act, which begins its phased rollout this year, mandates transparency and risk assessment for AI systems. The UK doesn’t automatically opt into EU regulations post-Brexit and has been sluggish in establishing its own rules.

Prime Minister Rishi Sunak wanted to promote a “pro-innovation” framework in the UK, hinting at a deregulated environment. 

This US-UK agreement differentiates the UK from the EU’s regulatory environment. Sunak and Chancellor Jeremy Hunt had previously discussed building a US-inspired tech industry, a British equivalent of Silicon Valley.

However, thus far, the UK has failed to produce a generative AI startup comparable to those in the US, France’s Mistral, or Germany’s Aleph Alpha.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.


Stay Ahead with DailyAI


Sign up for our weekly newsletter and receive exclusive access to DailyAI's Latest eBook: 'Mastering AI Tools: Your 2024 Guide to Enhanced Productivity'.


*By subscribing to our newsletter you accept our Privacy Policy and our Terms and Conditions