Organisers of the UK’s AI Safety Summit, scheduled to take place on the 1st and 2nd of November 2023, have unveiled the full program for day one.
Aiming to shape approaches to developing safe AI, the event promises to assemble international governments, major AI corporations, and leading research experts.
Notably, representatives from China are expected to attend despite escalating geopolitical tensions with the US.
The UK government has released the complete first-day schedule; details of the second day are still forthcoming.
Day 1: Navigating frontier AI
Day one features sessions on the risks and safety of ‘frontier models,’ a term that has come to describe sophisticated foundation models like GPT-4.
The summit starts with an opening plenary featuring remarks from the UK’s Secretary of State for Science, Innovation and Technology, followed by contributions from nations leading the charge in AI development and from the UK’s Frontier AI Taskforce.
A series of breakout sessions will enable multi-disciplinary dialogue on various critical issues:
- Global safety risks from frontier AI misuse: Evaluating the safety concerns stemming from recent and upcoming frontier AI models, focusing on biosecurity and cybersecurity.
- Unpredictable advances in frontier AI capability: Discussing the implications of rapid leaps in AI capabilities, the advent of new forecasting methods, and the future trajectory of AI development, including open-source considerations.
- Potential loss of control over advanced AI: Contemplating scenarios where AI surpasses human oversight, the associated risks, and exploring tools for prevention and monitoring.
- Societal integration of frontier AI: Addressing challenges related to AI’s role in elections, bias, crime, online safety, and global inequality, alongside discussing existing countermeasures.
Improving frontier AI safety
Another series of roundtable discussions will focus on enhancing AI safety:
- Responsible scaling at AI developers: Debating the best practices for responsible capability scaling, risk assessment, governance, and accountability within frontier AI development.
- National policymakers’ role in AI: Discussing policy measures to manage AI risks, including monitoring, accountability, licensing, and managing open-source AI models.
- International collaboration on AI: Identifying areas where global cooperation is essential to mitigate risks and seize opportunities presented by frontier AI.
- Scientific community’s contribution to AI safety: Evaluating the current technical solutions for AI safety, urgent research areas, and emerging solutions.
The day concludes with a panel discussion on leveraging AI to revolutionize education for upcoming generations, followed by closing remarks from the UK’s Secretary of State.
Day 2: Shaping the future of AI
Prime Minister Rishi Sunak will lead a select assembly of government representatives, corporate leaders, and experts to intensify discussions on managing emerging AI technology risks.
In a parallel session, UK Technology Secretary Michelle Donelan will reconvene with international counterparts to solidify the course of action discussed during the summit.
Top tech bosses and politicians to attend the Summit
Top executives from Silicon Valley are set to attend the summit, including Sam Altman from OpenAI, Demis Hassabis from Google DeepMind, and Dario Amodei from Anthropic.
Other notable attendees expected at the summit include representatives from Elon Musk’s xAI and CEOs Alex Karp, Aidan Gomez, and Emad Mostaque from AI enterprises Palantir, Cohere, and Stability AI, respectively.
Furthermore, senior policy figures from the tech giants, including Meta’s Sir Nick Clegg, Microsoft’s Brad Smith, and Google’s James Manyika, are also set to represent their respective companies.
In preparation for the summit, participating companies have been encouraged to endorse a statement, which world leaders are expected to sign at the event.
Ongoing negotiations involving officials from the US, EU, G7 nations, and China have been taking place to finalize the wording of this statement. It’s anticipated to include a stern warning about the potential “catastrophic harm” AI could inflict if left unchecked.
Companies are also being requested to disclose their policies detailing commitments to “safe AI development and deployment,” along with their views on how AI can “address the most urgent and important problems for humanity.”
US Vice President Kamala Harris is expected to attend. German Chancellor Olaf Scholz has reportedly declined the invitation, but French President Emmanuel Macron and representatives from China are likely to participate.
While the summit is not expected to directly result in the creation of a new AI regulatory body or international laws, it aims to be a crucial step towards future regulation.
A government spokesman emphasized the summit’s collaborative approach, stating, “The AI Safety Summit will bring together a wide array of attendees, including international governments, academia, industry, and civil society, as part of a collaborative approach to drive targeted, rapid international action on the safe and responsible development of AI.”