Tech companies across the globe commit to fresh set of voluntary rules

May 21, 2024

  • 16 international tech companies signed up to a new safety framework
  • This comes as eminent researchers warn of AI's potential "extreme risks"
  • At the Seoul AI Safety Summit, 10 nations set up an AI safety network

Leading AI companies have agreed to a new set of voluntary safety commitments, announced by the UK and South Korean governments before a two-day AI summit in Seoul.

Sixteen tech companies opted into the framework, including Amazon, Google, Meta, Microsoft, OpenAI, xAI, and Zhipu AI.

It’s the first framework agreed upon by companies in North America, Europe, the Middle East (The Technology Innovation Institute), and Asia, including China (Zhipu AI). 

Among the commitments, companies pledge “not to develop or deploy a model at all” if severe risks can’t be managed.

Companies also agreed to publish how they’ll measure and mitigate risks associated with AI models.

The new commitments come after eminent AI researchers, including Yoshua Bengio, Geoffrey Hinton, Andrew Yao, and Yuval Noah Harari, published a paper in Science titled “Managing extreme AI risks amid rapid progress.”

That paper made several recommendations which helped guide the new safety framework:

  • Oversight and honesty: Developing methods to ensure AI systems are transparent and produce reliable outputs.
  • Robustness: Ensuring AI systems behave predictably in new situations.
  • Interpretability and transparency: Understanding AI decision-making processes.
  • Inclusive AI development: Mitigating biases and integrating diverse values.
  • Evaluation for dangerous actions: Developing rigorous methods to assess AI capabilities and predict risks before deployment.
  • Evaluating AI alignment: Ensuring AI systems align with intended goals and do not pursue harmful objectives.
  • Risk assessments: Comprehensively assessing societal risks associated with AI deployment.
  • Resilience: Creating defenses against AI-enabled threats such as cyberattacks and social manipulation.

Anna Makanju, vice president of global affairs at OpenAI, stated about the new recommendations, “The field of AI safety is quickly evolving, and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science. We remain committed to collaborating with other research labs, companies, and governments to ensure AI is safe and benefits all of humanity.”

Michael Sellitto, Head of Global Affairs at Anthropic, commented similarly, “The Frontier AI safety commitments underscore the importance of safe and responsible frontier model development. As a safety-focused organization, we have made it a priority to implement rigorous policies, conduct extensive red teaming, and collaborate with external experts to make sure our models are safe. These commitments are an important step forward in encouraging responsible AI development and deployment.”

Another voluntary framework

This mirrors the “voluntary commitments” made at the White House in July last year by Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI to encourage AI technology’s safe, secure, and transparent development. 

These new rules state that the 16 companies will “provide public transparency” on their safety implementations, except where doing so might increase risks or divulge sensitive commercial information to a degree disproportionate to the societal benefit.

UK Prime Minister Rishi Sunak said, “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety.” 

It’s a world first because firms beyond Europe and North America, such as China’s Zhipu AI, have joined.

However, voluntary commitments to AI safety have been in vogue for a while.

There’s little risk for AI companies in agreeing to them, as there’s no means of enforcement. That also shows how blunt an instrument they are when push comes to shove.

Dan Hendrycks, the safety adviser to Elon Musk’s startup xAI, noted that the voluntary commitments would help “lay the foundation for concrete domestic regulation.”

A fair comment, but by that admission, the foundations are yet to be laid, even as extreme risks knock at the door, according to some leading researchers.

Not everyone agrees on how dangerous AI really is, but the point stands: the sentiment behind these frameworks isn’t yet matched by action.

Nations form AI safety network

As this smaller AI safety summit gets underway in Seoul, South Korea, ten nations and the European Union (EU) agreed to establish an international network of publicly backed “AI Safety Institutes.”

The “Seoul Statement of Intent toward International Cooperation on AI Safety Science” involves the UK, the United States, Australia, Canada, France, Germany, Italy, Japan, South Korea, Singapore, and the EU.

Notably absent from the agreement was China. However, the Chinese government participated in the summit, and the Chinese firm Zhipu AI signed up to the framework described above.

China has previously expressed a willingness to cooperate on AI safety and has been in ‘secret’ talks with the US.

This smaller interim summit came with less fanfare than the first, held in the UK’s Bletchley Park last November. 

However, several well-known tech figures joined, including Elon Musk, former Google CEO Eric Schmidt, and DeepMind founder Sir Demis Hassabis.

More commitments and discussions will come to light over the coming days.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.

