US-EU discussions push for temporary AI ‘stopgap’ before regulations become binding

June 1, 2023


The exponential growth of AI systems is outpacing research and regulation, leaving governments in the awkward position of balancing the technology’s advantages against its risks.

Laws take years to develop and become legally binding. AI evolves on a weekly basis. 

That’s the mismatch facing AI leaders and politicians: the first piece of meaningful AI regulation in the West, the EU AI Act, isn’t expected to become binding until 2026. Just a year ago, ChatGPT was barely a whisper.

Top US and EU officials met for the US-EU Trade and Technology Council (TTC) on May 31 in Luleå, Sweden. Margrethe Vestager, Europe’s digital commissioner, who met with Google CEO Sundar Pichai the week before to discuss a potential ‘AI Pact,’ said, “Democracy needs to show we are as fast as the technology.”

Officials acknowledge the yawning gap between the pace of technology and the pace of lawmaking. Referring to generative AI like ChatGPT, Gina Raimondo, the US commerce secretary, said, “It’s coming at a pace like no other technology.”

So what did the TTC meeting achieve?

Primarily, attendees discussed non-binding or voluntary frameworks around risk and transparency, which will be presented to the G7 in the fall. 

The EU, which takes a more hands-on approach to digital legislation, favors tiered AI regulation, in which AI systems are sorted into categories based on risk.

These include an “unacceptable risk” tier, which is banned outright, and a “high risk” tier, whose obligations tech bosses like OpenAI CEO Sam Altman fear will compromise their products’ functionality.

EU AI Act risk levels. Source: EU.

The US isn’t proposing such definitive regulations, favoring voluntary rules.

Many more meetings between the EU, the US, and big tech will be required to align views and turn them into meaningful practical action.

Will voluntary AI rules work?

There are many examples of voluntary rules in other sectors, such as voluntary data security frameworks and ESG disclosures, but none sit as close to the cutting edge as a voluntary AI governance framework.

After all, we’re dealing with an extinction-level threat here, according to top tech leaders and academics who co-signed the Center for AI Safety’s statement on AI risk this week.

Big companies like OpenAI and Google already have central departments focused on governance and internal compliance, so aligning their products and services with voluntary frameworks could be a matter of rewriting internal policy documents.

Voluntary rules are better than nothing, but with numerous proposals on the table, politicians and AI leaders will have to choose something to run with sooner or later.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
