The EU AI Act represents a huge step in regulating AI, but is there a cost?

December 17, 2023

The EU reached a historic agreement on the AI Act, establishing a comprehensive legal framework for the technology’s use and development.

The Act sorts AI systems into four risk categories – unacceptable, high, limited, and minimal or no risk – with a different level of regulatory scrutiny applied to each.

AI has been around for decades, but don’t confuse it with generative AI – the likes of OpenAI’s ChatGPT, Meta’s LLaMA, and Google’s Bard – which has only been around for a year or so.

The EU first floated the idea of the AI Act in 2019, long before generative AI broke into the mainstream. Even in the last few months, we’ve seen language models evolve from GPT-3 to GPT-4V, a multimodal model that handles both text and images.

In December 2023, the EU confirmed its revisions to the Act following the explosion in generative AI, which is now the industry’s primary focus.

Meanwhile, generative AI companies are raising billions in funding in the US, Europe, and across Asia-Pacific. Governments have seen the value the technology can create for their economies, which is why the approach to regulation has, by and large, been to ‘wait and see’ rather than take strict action.

Gauging response to the AI Act

Responses to the AI Act have been mixed, with tech companies and officials from the French, German, and Italian governments arguing that it might prove too burdensome for the industry.

In June, over 150 executives from major companies like Renault, Heineken, Airbus, and Siemens united in an open letter, voicing their concerns about the regulation’s impact on business. 

Jeannette zu Fürstenberg, a founding partner of La Famiglia VC and one of the signatories, expressed that the AI Act could have “catastrophic implications for European competitiveness.”

One of the central issues raised in the letter is the stringent regulation of generative AI systems such as ChatGPT, Bard, and their European equivalents from startups like Mistral in France and Aleph Alpha in Germany. 

Aleph Alpha, which intends to pioneer ‘sovereign European AI systems,’ recently raised $500 million in Series B funding in one of Europe’s biggest funding rounds. Mistral, remarkably, is worth $2 billion despite only being founded in May.

Of course, business dissent to AI regulation comes as no surprise, but the key point is that people are worried about the technology. The EU’s primary responsibility, like that of any government, lies first with its people, not its businesses.

Some polls indicate that the public would prefer a slower pace of AI development and generally distrusts the technology and its impacts. Leading non-business institutions, such as the Ada Lovelace Institute, generally find that the Act supports and protects people’s rights.

Reactions to the Act on X – a useful, if not entirely reliable, gauge of public opinion – are mixed. Some commenters responding directly to posts from EU officials argued that the EU is entangling its tech industry in a web of its own making.

Commenting on a post from EU Commissioner Thierry Breton, one user who doesn’t see AI as risky quipped, “Let’s finally regulate algebra and geometry these are HIGH RISK TECHNOLOGIES.”

The jibe refers to the Act regulating the underlying technology itself rather than only its uses. France Digitale, a French organization representing tech startups in Europe, said, “We called for not regulating the technology as such, but regulating the uses of the technology. The solution adopted by Europe today amounts to regulating mathematics, which doesn’t make much sense.”

Others point to the Act’s impact on innovation. “Stifle innovation through regulation, so that Europe will never have a world leading tech platform,” one states, encapsulating the worry that the EU’s regulatory approach could hinder its ability to compete on the global tech stage.

Another user raises the question of the sweeping regulations’ democratic legitimacy: “Who democratically asked you this regulation? Stop pretending to do things to ‘protect’ people.” Another said, “You just sent half of the European AI/ML companies to the UK and America.”

Are these responses hyperbolic, or does the AI Act effectively end European AI competitiveness?

The EU sees early AI regulation as necessary for both protection and innovation

Protect people from AI, and a well-rounded, ethical industry will follow – that’s the Act’s broad stance. 

AI’s true risks, however, are polarizing. At the start of the year, ChatGPT’s rise to fame was met with an avalanche of fear and anxiety about AI taking over, with statements from AI research institutes like the Center for AI Safety (CAIS) likening the technology’s risks to pandemics and nuclear war.

AI’s portrayal in popular culture and literature laid the groundwork for this paranoia to take root. From Terminator to the Machines in The Matrix, AI is typically cast as a combative force that ultimately turns on its creators once it believes it can succeed and finds a motive to do so.

However, this isn’t to dismiss AI’s risks as mere artifacts of popular culture belonging to the realm of fiction. Credible voices within the industry and science at large are genuinely concerned about the technology.

Two of the three ‘AI godfathers’ who paved the way for neural networks and deep learning – Yoshua Bengio and Geoffrey Hinton – are concerned about AI. The third, Yann LeCun, takes the opposite stance, arguing that AI development is safe and that the technology won’t achieve destructive superintelligence.

When even those most qualified to judge AI cannot agree, it’s very tricky for lawmakers without technical experience to act. Any AI regulation will likely get some of its definitions and stances wrong, since AI’s risks are not as clear-cut as those of something like nuclear power.

Does the EU AI Act effectively end European competition in the sector?

Comparing the EU’s approach to the AI industry with the US and Asia reveals different regulatory philosophies and practices. 

The US has been advancing AI through significant investment in research and development, with multiple federal departments and organizations like the National Science Foundation and the Department of Energy playing key roles. Recently, individual states have also introduced legislation to address AI-related harms.

Biden’s Executive Order increased the pressure on federal agencies to consult on and legislate for the technology, likely producing a patchwork of domain-specific laws rather than the EU’s style of sweeping, bloc-wide regulation.

China, with a tech industry second only to the US’s, has largely targeted regulation at upholding its government’s socialist values rather than protecting people from risk.

The UK, an interesting case study for post-Brexit divergence from EU regulation, has opted for a laissez-faire approach similar to that of the US. Thus far, this hasn’t produced an AI company on par with France’s Mistral or Germany’s Aleph Alpha, but that could change.

Compared to the powerhouses of the US and China, the EU’s technology ecosystem shows clear challenges and underperformance, especially in market capitalization and research and development investment.

An analysis by McKinsey found that large European companies, including those in technology-creating industries like ICT and pharmaceuticals, were 20% less profitable, grew revenues 40% more slowly, invested 8% less, and spent 40% less on R&D than their counterparts in the study’s sample between 2014 and 2019.

This gap is particularly evident in tech-creating industries. For example, in quantum computing, 50% of the top tech companies investing in this technology are in the United States, 40% are in China, and none are in the EU. Similarly, the US captured 40% of external funding in AI between 2015 and 2020, while Europe managed only 12%​​.

The EU’s small tech industry. Source: Financial Times.

However, it’s also important to note that the European tech ecosystem has shown signs of robust growth and resilience, especially in venture capital investment. 

In 2021, Europe saw a significant increase in venture capital investment, with year-on-year growth of 143%, outpacing both North America and Asia. The surge was driven by strong interest from the global VC community and an increase in late-stage funding. European startups in sectors like fintech and SaaS benefited significantly from the increased investment.

Despite these positive trends, the overall global influence of Europe’s tech industry remains relatively limited compared to the US and Asia. The US had five tech companies valued at over $1 trillion, while China’s two largest companies combined were worth more than the total value of all European public tech companies.

Europe’s largest public technology company at the time was valued at $163 billion, which would not even make the top 10 list in the US.

The point is that it’s easy for onlookers to blame AI regulation for hindering the EU’s tech industry when the EU has never been able to compete with the US in this arena. In many ways, though, it’s a pointless comparison, as no one can compete with the US in GDP terms. Nor is GDP the only measure that matters when casting the AI Act as the ‘end of EU competitiveness.’

An article in Le Monde highlighted the EU’s poor GDP per capita, with EU countries like France, Germany, and Italy only being comparable to some of the ‘poorer’ US states. It says, “Italy is just ahead of Mississippi, the poorest of the 50 states, while France is between Idaho and Arkansas, respectively 48th and 49th. Germany doesn’t save face: It lies between Oklahoma and Maine (38th and 39th).”

However, GDP per capita certainly isn’t everything. Life expectancy, in particular, is a contentious topic in the US, where statistics generally show people living markedly shorter lives than in other developed countries. In 2010, American men and women were expected to live three years less than the EU average, and four or five years less than the average in some Scandinavian countries, Germany, France, and Italy.

In the end, on a purely economic comparison the EU will never match the US, but the link between economic performance and the well-being of populations is non-linear.

Claiming the AI Act will worsen people’s lives by eroding competitiveness in the AI industry pays too little attention to its other impacts.

For example, the act brings important rules regarding copyright to the table, hopefully curbing AI companies’ frivolous use of people’s intellectual property. It also prevents certain uses of AI-powered facial recognition, social scoring, and behavioral analysis.

Erosion of competitiveness is perhaps more immediately tangible than regulation’s benefits, which remain hypothetical and contestable for now.

It could be argued that trading some economic growth for the Act’s potential benefits to people’s well-being is a savvy deal.

A balancing act

Despite criticisms, the EU often sets regulatory standards, as seen with the General Data Protection Regulation (GDPR).

Although GDPR has been critiqued for favoring established tech companies over startups and not directly boosting the EU’s tech sector, it has become a de facto international standard for data protection, influencing global regulatory practices.

While the EU might not be the ideal regulator for AI, it is currently the most proactive and systematic in this area.

In the US, federal AI regulation is limited, with the Biden administration focusing more on guidance than binding laws. Consequently, tech companies often find the EU’s approach more predictable despite its bureaucracy.

The EU’s efforts will likely serve as a reference point for other governments developing AI regulations.

Given AI’s transformative potential, systematic rules are crucial – though that needn’t mean stifling innovation and open-source development – and the EU is the first to attempt this delicate, intractable task.

It’s a valiant effort, and who knows, it might shield EU citizens from the worst impacts of AI yet to come. Or, it might see the EU sacrifice its AI industry for virtually no payoff. For now, it’s all a matter of opinion.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
