Global AI regulation has become a geopolitical issue, so where will it lead?

November 8, 2023
AI regulation

The UK AI Safety Summit, combined with Biden’s executive order, has forced AI regulation into the spotlight, but the bigger picture remains hazy. 

The summit brought together a diverse group of stakeholders, demonstrating a collective commitment to shaping the future of AI. 

Reception across the political spectrum of British media was generally positive, with publications typically opposed to Sunak’s gung-ho approach, including the Guardian, heralding the event as an overall success.

While there’s a lingering sense that AI policy events to date have amounted to little more than promises, dismissing them entirely would be overly reductive.

The Bletchley Declaration was one of the summit’s headline outputs. Endorsed by 28 countries, including the US, UK, and China, as well as the EU, it underscored international consensus on AI oversight.

Two days before the summit, Biden’s executive order outlined the US strategy to manage AI risks, showcasing a national response to what is certainly a global challenge.

The order’s timing illustrated an attempt to assert leadership and set standards in the rapidly advancing field of AI.

Together, these events have certainly laid down the “why?” of regulation – to curb risks, emphasize benefits, and protect vulnerable groups. 

We’ve been left with the “how?”, as the discourse surrounding the nature and execution of regulation remains contested.

Major powers are now jostling for regulatory leadership, a race UK Prime Minister Rishi Sunak was intent on leading when he announced the summit.

That ambition was somewhat eclipsed by the executive order, with Vice President Kamala Harris stating quite plainly, “We intend that the actions we are taking domestically will serve as a model for international action.”

Gina Raimondo, the US commerce secretary, further captured the dual spirit of competition and collaboration at the summit, stating, “Even as nations compete vigorously, we can and must search for global solutions to global problems.”

Speaking of the ethos behind the recent executive order, Ben Buchanan, the White House’s AI adviser, said, “Leadership for the United States in AI is not just about inventing the technology.”

“It’s about crafting and co-developing the governance mechanisms, the safety protocols, the standards, and international institutions that will shape this technology’s impact.”

It seems that, for the US, AI regulation is a geopolitically competitive topic, especially when combined with the country’s restrictions on high-end AI exports to Russia, the Middle East, and China.

A little less talk and a little more action?

The jury is out on whether these events will expedite legislation and whether that legislation will be effective. Without laws in place, AI developers can continue to promote voluntary frameworks without being bound to them.

Even with laws in place, AI moves quickly; those who truly understand the technology and its impacts are few and far between, and even their opinions are divided.

The ‘AI godfathers’, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, cannot even agree on AI risks, their scale, or how to tackle them.

Charlotte Walker-Osborn, technology partner at the law firm Morrison Foerster, stated that the Bletchley Declaration will “likely further drive some level of international legislative and governmental consensus around key tenets for regulating AI.” 

‘Some level’ is revealing terminology. As Walker-Osborn points out, “a truly uniform approach is unlikely” due to varying approaches to regulation and governance between countries. Achieving consensus is one thing, but implementing it across disparate legal and regulatory frameworks is quite another.

Furthermore, the absence of binding requirements, as conceded by Rishi Sunak, and the reliance on voluntary testing agreements among governments and major AI firms further point to limitations. 

Without enforceable regulations, declarations may lack the teeth needed to drive concrete change, and the same is true of Biden’s executive order.

We may have entered a turbulent period of symbolic regulatory one-upmanship, with concrete legislation still largely in the pipeline outside of China.

According to Deb Raji, a fellow at the Mozilla Foundation, the summit revealed varying perspectives.

“I think there’s pretty divergent views across various countries around what exactly to do,” said Raji, demonstrating that even among those who agree on the principle of regulation, the specifics remain contentious. 

Others had previously said that Congress is so deeply divided on some aspects of AI that legislation is likely a long way off.

Anu Bradford, a law professor at Columbia University, said, “The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future.”

Similarly, Margaret Mitchell, a researcher and chief ethics scientist at Hugging Face, stated, “Governments will seek to protect their national interests, and many of them will seek to establish themselves as leaders.”

Reliability of voluntary frameworks

Historically, relying on voluntary frameworks of any form has proven unreliable.

From the failures of the League of Nations and the Munich Agreement in the 1930s to the Kyoto Protocol, the Paris Agreement, and the UN Guiding Principles (UNGPs), and, in the corporate world, the Enron scandal, past attempts at multilateral voluntary policy don’t inspire confidence.

Global AI policymaking risks following in historical footsteps, with promises breaking upon the rocks of realpolitik. For AI policy, an imbalance in representation and influence has already been exposed. Mike Katell, an ethics fellow at the Alan Turing Institute, pointed out regional disparities, stating, “There are big gaps in the Global South. There’s very little happening in Africa.” 

Moreover, regulation requires rigorous, robust legal processes to hold extremely powerful companies, like Microsoft and Google, to account. 

The US, UK, EU, and China can afford to create the types of legislative frameworks required to at least attempt to hold tech companies to account over AI, but the same can’t be said of most developing countries. 

This concentrates legal protection in wealthier countries, leaving others vulnerable to exploitation, both in terms of labor for data-labeling services, which are fundamental to AI development, and in terms of their data, which AI companies could readily harvest due to a lack of digital rights.

Regional priorities differ

AI regulation is not merely a domestic issue but a strategic piece on the international chessboard. 

The US, for instance, has shown its hand with executive orders that seek to safeguard AI innovation while ensuring it remains aligned with democratic values and norms. 

Similarly, the EU proactively proposed the AI Act, which aims to set early global standards for AI development and use. Arguably, however, the EU moved too early, risking legislation that becomes outdated or poorly defined for the current AI industry. This also shows that ‘watching and waiting’ is as much a strategic play as a practical one.

Thus far, unifying the EU bloc on the finer nuances of AI regulation, such as what limits are set and for whom, and how law enforcement should act on non-compliance, has proven challenging. While the law will likely be ratified soon, its impact on current AI R&D will reveal how effective the act is at enforcing compliance.

Meanwhile, countries such as Canada and Japan have hinted at forthcoming AI policy initiatives of their own.

In addition, leading AI powers are acutely aware that establishing regulatory frameworks can provide them with a competitive edge. The regulations they propose not only set the standards for ethical AI usage but also define the field of play for economic competition. 

The landscape of AI governance is set to become a mosaic of varied approaches and philosophies.

“AI Cold War” debates intensify

There is another aspect to the US’ aggressive stance on becoming a Western model for AI development – it strengthens its position against China. 

Reflecting a rivalry that is predominantly technological rather than nuclear or ideological, competition between the US and China has been termed the “AI Cold War” by the media, or perhaps more innocuously, the “AI Race.”

Utilizing AI for military purposes is central to the US narrative on restricting trade with China, with semiconductor technology emerging as a crucial battleground due to its fundamental importance to AI industry competitiveness.

The narrative surrounding the AI Cold War took root following China’s announcement of its ambition to become the global AI leader by 2030. This sparked concern and calls for the US to maintain technological supremacy, not just for its own sake but for democratic values at large, given AI’s potential to reinforce authoritarian regimes, as some observe in China’s use of the technology for state surveillance.

High-profile figures such as former Google CEO Eric Schmidt and political scientist Graham T. Allison subsequently raised alarms over China’s rapid advancement in AI, suggesting that the US may be lagging in crucial areas.

Moreover, the potential for unethical use of AI, primarily associated with China, presents an ideological chasm reminiscent of the first Cold War. Ethical considerations in AI deployment have thus become a pivotal element of the narrative about this emerging cold war.

Politico later suggested that an alliance of democratic nations may be necessary to counter China’s ascendancy in AI.

The semiconductor industry is particularly contentious, with Taiwan playing a critical role in geopolitical tensions. The Taiwan Semiconductor Manufacturing Company (TSMC) sits at the center: the majority of the world’s semiconductors are produced in or pass through Taiwan, whose sovereignty China does not recognize. Indeed, most of Nvidia’s chips are also manufactured in Taiwan.

Tensions have also spilled over into trade restrictions, as seen when US and European officials cited the “AI Cold War” as justification for banning Huawei’s 5G technology from public procurement processes over surveillance concerns.

Additionally, both the Trump and Biden administrations have imposed limitations on Dutch company ASML, preventing the export of advanced semiconductor manufacturing equipment to China, again citing national security risks.

On the industrial policy front, the US advanced the Innovation and Competition Act and later passed the CHIPS and Science Act, which funnels billions into technology and manufacturing to counteract the perceived Chinese threat. The EU has mirrored this approach with its European Chips Act, seeking to bolster its own semiconductor manufacturing capabilities.

AI regulation is perhaps entering a new phase of more intense geopolitical debate.

Parallel to this, some even doubt whether the technology poses large-scale risks, whereas others are certain of it. The confusion on all sides is palpable.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
