High-level representatives from the United States and China have convened in Geneva, Switzerland, to address AI’s complex challenges and potential risks.
This series of discussions, agreed upon by President Joe Biden and President Xi Jinping, has taken place in relative secrecy.
Tarun Chhabra from the White House National Security Council and Dr. Seth Center, the acting special envoy for critical and emerging technology at the State Department, will engage with their Chinese counterparts from the Foreign Ministry and the National Development and Reform Commission.
The agenda covers a broad spectrum of AI-related issues centered around safety.
Beyond the practical purpose of preventing AI harms from spiraling out of control, the talks carry symbolic weight, signaling that AI safety transcends US-China competition, at least on some level.
An unnamed US official acknowledged the divergent perspectives between the two countries, stating, “While we may not see eye-to-eye with China on numerous AI-related topics and applications, we firmly believe that open communication on critical AI risks is essential for ensuring a safer world.”
This particular session will reportedly explore AI in military contexts but won’t touch on the ongoing US sanctions against China.
The talks are instead designed to be pragmatic, establishing common ground on AI safety rather than focusing on geopolitical relations.
The Geneva talks will also contribute to the growing international momentum to develop a framework for AI governance.
OpenAI CEO Sam Altman recently spoke about establishing a living, breathing agency to audit and control AI risks, similar to the International Atomic Energy Agency (IAEA).
His rationale is compelling: “I have pushed for an agency-based approach for kind of like the big picture stuff and not … write it in laws, … in 12 months, it will all be written wrong.”
So why hasn’t that happened, then?
Right now, obtaining complete international collaboration on AI safety feels like a bridge too far. Even if China and the US agreed, Russia would remain firmly on the periphery.
Despite obstacles, the US and China have shown some willingness to collaborate on AI issues at the international level.
In March, the US introduced a non-binding resolution at the UN advocating for “safe, secure, and trustworthy” AI, which secured China’s support.
There is another reason for China to accelerate these discussions, however. It seems certain that US AI weaponry, including the autonomous fighter jets demonstrated recently, is well ahead of China’s. Securing assurances from the US about how such technology will be used wouldn’t go amiss.
Jason Glassberg, a co-founder of Casaba Security and a leading expert on emerging AI threats, suggests that the talks will primarily serve as a foundation for future dialogue and may not yield immediate, tangible outcomes.
Glassberg emphasizes the importance of both nations recognizing the potential dangers of AI misuse, stating, “What’s most important right now is that both sides realize they each have a lot to lose if AI becomes weaponized or abused.”
He added, “All parties involved are equally at risk. Right now, one of the biggest areas of risk is with deepfakes, particularly for use in disinformation campaigns.”
With the US and China’s ongoing dialogue seemingly progressing, it might not be long before we see the establishment of an international agency for AI safety.