Notable absences hit the AI Safety Summit taking place in May

April 29, 2024
  • The second AI Safety Summit is set to take place on May 21-22 in Seoul, South Korea
  • It's a mostly virtual 6-month follow-up to the inaugural Bletchley Park event
  • Early signs show a reduced attendance from both politicians and companies
AI safety

Last year, the first Global AI Safety Summit was held at the UK’s historic Bletchley Park, captivating global attention. 

However, as the second 6-month follow-up summit approaches, scheduled for May 21-22 and hosted mostly virtually by the UK and South Korea, indications suggest a reality check might be incoming. 

The summit’s organizers hope to continue the momentum generated at Bletchley Park, where over 25 government representatives signed a joint declaration committing to collaborative oversight of AI. 

While no one expected this smaller interim event to meet or exceed the grandeur of the first summit, key attendees, including DeepMind and Mozilla, are opting out of the gathering. 

While the EU has not ruled out its presence at the event, a spokesperson confirmed that its top tech regulators, including Margrethe Vestager, Thierry Breton, and Vera Jourova, won’t attend.

The United States Department of State has confirmed that it will send representatives to the gathering in Seoul but hasn’t specified who.

Meanwhile, the Canadian, Brazilian, and Dutch governments have announced they won’t participate in the event.

The French government has also allegedly postponed the larger-scale annual Safety Summit until 2025, but this remains unconfirmed.

AI’s challenges have become more confusing

It's easy to declare that humanity must be protected from global, extinction-level events while the technology remains in its nascent stages and the real risk stays low.

On the other hand, meaningful action on deep fakes, environmental damage, and copyright requires real work that goes beyond pageantry and rhetoric.

While we’ve witnessed the emergence of a patchwork of laws and regulations designed to control AI, most notably the EU AI Act, many key issues remain unresolved. 

As Francine Bennett, interim director of the Ada Lovelace Institute, explained to Reuters, “The policy discourse around AI has expanded to include other important concerns, such as market concentration and environmental impacts.” 

AI safety’s broader scope demands extensive and highly subjective deliberation, which perhaps won’t come naturally in this virtual setting.

Another factor is that geopolitical tensions between Western powers and China continue to be a thorn in the side of negotiations. 

While the US and China have been discussing AI security in private meetings, other major events like the World Economic Forum witnessed frosty interactions between the two powers, including a walkout by the US delegation at a Chinese talk. 

This 6-month virtual safety summit will likely reflect on the moderate progress made so far, but strong practical action on key issues has yet to materialize.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
