The UK’s AI Safety Summit kicked off today, as politicians, tech bosses, researchers, and members of civil advocacy groups descended upon the historic Bletchley Park.
The British government unveiled “The Bletchley Declaration,” which received backing from representatives spanning 28 different countries, including the US and China.
The declaration highlighted the severe risks associated with the most advanced “frontier” AI systems, stating unequivocally: “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
The document also emphasized the global nature of AI risks and the necessity for international cooperation in addressing these challenges: “Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy, and responsible AI.”
Despite the strong rhetoric, the declaration stopped short of setting concrete policy goals, instead scheduling additional meetings in South Korea in six months and France in a year’s time to continue the conversation.
UK Prime Minister Rishi Sunak actively promoted the summit as a critical opportunity for global leaders, companies, researchers, and civil society groups to come together and lay the groundwork for international safety standards for AI.
In an address, King Charles said via video, “We are witnessing one of the greatest technological leaps in the history of human endeavor,” continuing, “There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure.”
Vice President Kamala Harris and US Secretary of Commerce Gina Raimondo attended from the US.
From China, Wu Zhaohui, the vice minister of science and technology, expressed willingness for China to collaborate with others, urging countries to “enhance dialogue and communication” and noting that the technology is “uncertain, unexplainable and lacks transparency.”
Rajeev Chandrasekhar, a minister of technology from India, warned of AI deepfakes: “By allowing innovation to get ahead of regulation, we open ourselves to the toxicity and misinformation and weaponization that we see on the internet today, represented by social media.”
Elon Musk, who was in attendance, said that with AI we should “hope for the best but prepare for the worst.”
Executives from Anthropic, DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI, and Tencent attended, as well as members of civil advocacy groups such as the Algorithmic Justice League.
Anthropic CEO Dario Amodei shared the guiding principles behind the company’s Responsible Scaling Policy during the afternoon session.
While the summit was rich in symbolism and attended by influential figures, critics argued that it was heavier on pageantry than substance, noting the absence of key political leaders such as President Biden, President Emmanuel Macron of France, and Chancellor Olaf Scholz of Germany.
Simultaneously, nations around the globe are advancing their own laws and regulations to tackle AI, most recently Biden’s executive order, though an executive order is not legislation in itself.
AI Safety Summit key events
Here are some key events from the AI Safety Summit, listed from most recent to earliest:
- “Some technology companies choose to prioritize profit over the well-being of their customers,” US Vice President Kamala Harris stated, emphasizing the necessity of collaborative efforts between governments, civil society, and the private sector to address the challenges posed by artificial intelligence. She acknowledged the US government’s voluntary commitments to leading AI companies to promote the safe development of the technology, asserting the administration’s readiness to take further steps if needed.
- The establishment of the US AI Safety Institute was highlighted by Harris, with its mission to create “rigorous standards to test the safety of AI models for public use.” Harris further expressed her desire for the US domestic plan on AI to inspire global policy, which is something she also mentioned at the signing of the recent US executive order.
- Harris reiterated the dual nature of AI, capable of both “profound good” and “profound harm.” She underscored the urgency for global action to manage the existential threats posed by AI, calling for a comprehensive approach to address all associated dangers.
- The “Bletchley Declaration” on AI risks was agreed upon by 28 nations, marking a significant step in the global conversation on AI safety. The declaration addresses the opportunities, risks, and the need for international action on frontier AI.
- The Republic of Korea was announced as the host for the second AI safety summit in six months, with France hosting the third summit a year from now.
- A call was made for proper “guardrails” for AI development, highlighting the summit’s role in defining AI’s future trajectory.
- Michelle Donelan, the UK Science Secretary, spoke of the profound responsibilities involved in AI development, noting the technology’s potential to either empower humanity or pose significant threats.
- Sir Nick Clegg, head of global affairs at Meta, called for global unity in AI development and regulation, stressing the importance of consistency across major jurisdictions.
- Clegg warned of the “danger” of over-speculating about AI’s future, emphasizing the need to address current technological challenges, particularly in the context of upcoming elections.
Day two of the event begins tomorrow, November 2nd, 2023.