Key US AI firms and Chinese AI experts have been conducting confidential discussions focused on the safety of AI.
These discussions, held in Geneva, Switzerland, represent a rare instance of cooperation between the two global powers, particularly given escalating geopolitical tensions over AI development.
US companies OpenAI, Anthropic, and Cohere attended alongside Chinese delegates from Tsinghua University and other state-backed entities, according to the Financial Times.
These gatherings, held in July and October 2023, had not been publicly reported until now.
Insiders suggested that major Chinese AI companies such as ByteDance, Tencent, and Baidu were absent from the talks, while Google DeepMind was briefed but didn’t participate.
The discussions explored technical cooperation and policy proposals, potentially feeding into international forums like the UN Security Council meeting on AI and the UK’s AI summit, to which China was invited.
The dialogues reportedly address potential hazards posed by AI, such as the spread of misinformation and societal disruption, and promote investment in research dedicated to AI safety.
An unnamed participant in the talks said: “There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors. And if they agree, it makes it much easier to bring the others along.”
China and the US are butting heads on the geopolitical stage, with the US ramping up trade restrictions to limit China’s procurement of high-end AI hardware. The aim is to keep China generations behind the US.
While China hasn’t joined every multilateral initiative on AI safety, it has expressed a collaborative sentiment, with the Chinese embassy in the UK stating, “China supports efforts to discuss AI governance and develop needful frameworks, norms, and standards based on broad consensus.”
The Financial Times reported that the talks were brokered by the Shaikh Group, a private mediation organization with experience in conflict diplomacy.
The success of these meetings has reportedly paved the way for future dialogues focused on aligning AI systems with legal codes and societal norms and values.
AI regulation will certainly need to be a coordinated effort, but ultimately we know little about the outcome of these talks.