The European Union’s Artificial Intelligence (AI) Act officially entered into force on August 1, 2024 – a watershed moment for global AI regulation.
This sweeping legislation categorizes AI systems by risk level and imposes oversight proportionate to each category.
The Act will completely ban some “unacceptable risk” forms of AI, like those designed to manipulate people’s behavior.
While the Act is now law in all 27 EU member states, the vast majority of its provisions don’t take immediate effect.
Instead, this date marks the beginning of a preparation phase for both regulators and businesses.
Nevertheless, the wheels are in motion, and the Act is sure to shape the future of how AI technologies are developed, deployed, and managed, both in the EU and internationally.
The implementation timeline is as follows:
- February 2025: Prohibitions on “unacceptable risk” AI practices take effect. These include social scoring systems, untargeted facial image scraping, and the use of emotion recognition technology in workplaces and educational settings.
- August 2025: Requirements for general-purpose AI models come into force. This category, which includes large language models like GPT, will need to comply with rules on transparency, security, and risk mitigation.
- August 2026: Regulations for high-risk AI systems in critical sectors like healthcare, education, and employment become mandatory.
The European Commission is gearing up to enforce these new rules.
Commission spokesperson Thomas Regnier explained that some 60 existing staff will be redirected to the new AI Office, and 80 more external employees will be hired in the next year.
Additionally, each EU member state is required to establish national competent authorities to oversee and enforce the Act by August 2025.
Compliance will not happen overnight. While any large AI company will have been preparing for the Act for some time, experts estimate that implementing the controls and practices can take six months or more.
The stakes are high for businesses caught in the Act’s crosshairs. Companies that breach it could face fines of up to €35 million or 7% of their global annual revenues, whichever is higher.
That’s a higher ceiling than the GDPR’s, and the EU doesn’t tend to make idle threats, having collected over €4 billion in GDPR fines to date.
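The penalty cap is "whichever is higher" of the two figures, so the flat €35 million floor dominates for smaller firms while the 7% revenue-based figure dominates for large ones. A minimal sketch of that arithmetic (the function name is illustrative, not from the Act):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of an AI Act fine for the most serious breaches:
    the greater of a flat EUR 35 million or 7% of global annual revenue."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A company with EUR 1 billion in revenue: 7% (EUR 70M) exceeds the flat cap.
print(max_fine_eur(1_000_000_000))
# A company with EUR 100 million in revenue: the flat EUR 35M cap applies.
print(max_fine_eur(100_000_000))
```

For comparison, the GDPR's equivalent tier tops out at €20 million or 4% of global turnover, so the AI Act's ceiling is substantially higher on both axes.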
International impacts
As the world’s first comprehensive AI regulation, the EU AI Act will set new standards worldwide.
Major players like Microsoft, Google, Amazon, Apple, and Meta will be among the most heavily targeted by the new regulations.
As Charlie Thompson of Appian told CNBC, “The AI Act will likely apply to any organization with operations or impact in the EU, regardless of where they’re headquartered.”
Some US companies are taking preemptive action. Meta, for instance, has restricted the availability of its AI model LLaMa 400B in Europe, citing regulatory uncertainty. OpenAI threatened to throttle product releases in Europe in 2023 but quickly backed down.
To comply with the Act, AI companies may need to revise training datasets, implement more robust human oversight, and supply EU authorities with detailed documentation.
This is at odds with how the AI industry operates: the proprietary models of OpenAI, Google, and their peers are secretive and highly guarded.
Training data is exceptionally valuable, and revealing it would likely expose vast quantities of copyrighted material.
There are tough questions to answer if AI development is to progress at the same pace as it has thus far.
Some businesses are under pressure to act sooner than others
The EU Commission estimates that some 85% of AI systems fall under “minimal risk,” requiring little oversight, but the Act’s rules weigh heavily on companies operating in its upper categories.
Human resources and employment is one area that falls into the Act’s “high-risk” category.
Major enterprise software vendors like SAP, Oracle, IBM, Workday, and ServiceNow have all launched AI-enhanced HR applications that incorporate AI into screening and managing candidates.
Jesper Schleimann, SAP’s AI officer for EMEA, told The Register that the company has established robust processes to ensure compliance with the new rules.
Similarly, Workday has implemented a Responsible AI program led by senior executives to align with the Act’s requirements.
Another category under the cosh is AI systems used in critical infrastructure and essential public and private services.
This encompasses a broad range of applications, from AI used in energy grids and transportation systems to those employed in healthcare and financial services.
Companies operating in these sectors will need to demonstrate that their AI systems meet stringent safety and reliability standards. They’ll also be required to conduct thorough risk assessments, implement robust monitoring systems, and ensure their AI models are explainable and transparent.
While the AI Act bans certain uses of biometric identification and surveillance outright, it makes limited concessions in law enforcement and national security contexts.
This has proved a fertile area for AI development, with companies like Palantir building advanced predictive policing systems that are likely to run afoul of the Act.
The UK has already experimented heavily with AI-powered surveillance. Although the UK is outside the EU, many AI companies based there will almost certainly have to comply with the Act.
Uncertainty lies ahead
The response to the Act has been mixed. Numerous companies across the EU’s tech industry have expressed concerns about its impact on innovation and competition.
In June, over 150 executives from major companies like Renault, Heineken, Airbus, and Siemens united in an open letter, voicing their concerns about the regulation’s impact on business.
Jeannette zu Fürstenberg, one of the signatories and founding partner of Berlin-based venture capital fund La Famiglia VC, expressed that the AI Act could have “catastrophic implications for European competitiveness.”
France Digitale, representing tech startups in Europe, criticized the Act’s rules and definitions, stating, “We called for not regulating the technology as such, but regulating the uses of the technology. The solution adopted by Europe today amounts to regulating mathematics, which doesn’t make much sense.”
However, backers argue the Act also presents opportunities for innovation in responsible AI development. The EU’s stance is clear: protect people from AI, and a more well-rounded, ethically-driven industry will follow.
Regnier told Euro News, “What you hear everywhere is that what the EU does is purely regulation (…) and that this will block innovation. This is not correct.”
“The legislation is not there to push companies back from launching their systems – it’s the opposite. We want them to operate in the EU but want to protect our citizens and protect our businesses.”
While skepticism looms large, there is cause for optimism. Setting boundaries on AI-powered facial recognition, social scoring, and behavioral analysis is designed to protect EU citizens’ civil liberties, which have long taken precedence over technology in EU regulations.
Internationally, the Act may help build public trust in AI technologies, quell fears, and set clearer standards for AI development and use.
Building long-term trust in AI is vital to keeping the industry powering forward, so there could be some commercial upside to the Act, though it’ll take patience to see it to fruition.