On day two of the AI Safety Summit, UK Prime Minister Rishi Sunak sought to drive home that AI developers will permit governments to evaluate their tools before they reach the market.
Sunak said the summit’s outcomes “will tip the balance in favor of humanity” and revealed that industry leaders, including Meta, Google DeepMind, and OpenAI, have consented to pre-release testing of their AI innovations – something they’d already committed to do under a recent voluntary framework developed by the US.
The commitments made during day two of the Summit included establishing an expert body named the AI Safety Institute.
LIVE: Prime Minister Rishi Sunak’s statement from the AI Safety Summit at Bletchley Park https://t.co/YLn6O8FO7f
— UK Prime Minister (@10DowningStreet) November 2, 2023
Another headline of day two was the unveiling of a forthcoming “State of AI Science” report led by ‘AI godfather’ Yoshua Bengio.
“This idea is inspired by the way the Intergovernmental Panel on Climate Change was set up to reach international science consensus,” Sunak said.
The agreed-upon framework advocates for comprehensive safety assessments of AI models before and after they go live, emphasizing collaborative testing efforts involving governments, particularly in areas affecting national security and societal welfare.
The industry’s commitment to this cause was underscored by remarks from Demis Hassabis, CEO of Google DeepMind, who stated, “AI can help solve some of the most critical challenges of our time, from curing disease to addressing the climate crisis. But it will also present new challenges for the world and we must ensure the technology is built and deployed safely. Getting this right will take a collective effort from governments, industry and civil society to inform and develop robust safety tests and evaluations. I’m excited to see the UK launch the AI Safety Institute to accelerate progress on this vital work.”
China is not participating in these multilateral initiatives. However, this pact garners support from the EU and leading countries such as the US, UK, Japan, France, and Germany and is backed by tech behemoths like Google, Amazon, Microsoft, and Meta.
While the PM faced questions over the voluntary nature of the agreements and the absence of binding legislation, he maintained that AI necessitates swift action, suggesting that “binding requirements” for AI firms might be inevitable.
Together with yesterday’s Bletchley Park declaration, the Summit attempted to galvanize action around AI, though critics have largely dismissed it as symbolic rather than actionable.
Ahead of his live conversation with Sunak, Musk took to X to post a mocking cartoon of figures representing global powers discussing AI risks while simultaneously rubbing their hands together at the potential of dominance.
He’s nothing if not controversial, but it was a witty quip for a summit inherently built on promises. That doesn’t mean the event achieved nothing, but the real substance behind the talks remains hypothetical.
— Elon Musk (@elonmusk) November 2, 2023
The summit’s marquee outcome remains the Bletchley Declaration, issued on day one and signed by 28 governments, including the UK, US, and EU, which promises a collaborative approach to AI safety standards, evocative of climate crisis agreements.
Overall, Rishi Sunak’s diplomatic efforts at the AI summit have been recognized as an achievement, establishing the UK as a pioneer in pursuing global AI safety and regulation, setting the stage for France’s hosting of the next summit in 2024.
After pushing for an international AI safety summit for 9 years, I was really moved to get to be part of it finally happening! @RishiSunak had more success than I’d expected, with striking unity between 1) US & China, 2) those focused on current harms vs existential risk & 3)… https://t.co/KAQ4HCIPQo
— Max Tegmark (@tegmark) November 2, 2023
Day two of the Summit
Day two of the Summit was capped off by a conversation between Sunak and Musk. Here are some of the key events, listed from latest in the day (first) to earliest (last).
Watch the 50-minute stream here.
— Elon Musk (@elonmusk) November 2, 2023
- Musk argued that a physical “off switch” could shut down AI in the case of catastrophic problems. “What if one day they get a software update, and suddenly they’re not so friendly after all?” Musk told Sunak.
- When Sunak asks why Musk recently changed Twitter’s content moderation system, Musk argues that all content moderators have biases. He asks, “How do we have a consensus-driven approach to truth?” and says his goal is to get to a “purer truth.” Musk claims his new moderation system simply provides more context and transparency, stating, “Everything is open sourced. You can see all of the data and can see if there has been any gaming of the system, suggest improvements… Truth pays.”
- Musk predicts that in the future, AI robots could become true friends with humans. He argues they will have detailed memories and knowledge from reading extensively, saying, “You could talk to it every day, you will actually have a great friend. That will actually be a real thing.”
- Musk makes the bold prediction that AI will advance to the point that “no job is needed” for humans. He states, “You can have a job if you want to for personal satisfaction, AI can do everything.” Musk says this could be positive or negative, posing challenges for finding meaning and purpose. But he also argues it could provide “universal high income” and make the best tutors. Overall, he sees many benefits for education, productivity, and automation of dangerous jobs.
- When Sunak notes he faced criticism for inviting China, Musk praises the decision as “brave.” Musk argues that collaborating with China on AI safety is essential, saying their participation is a very positive sign.
- Musk tells Sunak he believes governments need to act as “referees” to ensure public safety with AI while still allowing innovation. He reiterates his view that, overall, AI will be “a force for good.” This contrasts with his past warnings, reflecting his shifting views on AI.
- Ahead of their conversation, Musk expresses optimism about AI’s potential but warns it could pose risks, using the analogy of a “magic genie problem” where wishes often go wrong.
- During his press conference, Sunak defended steps governments are taking to address AI safety risks, saying they are doing the “right and responsible thing” to protect the public, even if the risks are still uncertain.
- A new poll finds only 15% of people are confident in the UK government’s ability to effectively regulate AI, while 29% express no confidence at all.
- When asked if AI could pose existential threats, Sunak says there is a plausible case it may bring risks on the scale of nuclear war or a pandemic. He argues leaders, therefore, have a duty to take protective steps.
- Science Secretary Donelan says the AI risk she is most worried about is a “Terminator scenario” where machines become uncontrollable. She sees this as a lower probability but the highest impact.
More analysis of the Summit will come in the following days. Overall, the impression given is of a symbolically momentous event that has enormous potential.
But potential is not readily converted into legislation, and, ultimately, action. Only time will tell on that front.