The EU AI Act passed in a landslide and will come into force this year

March 14, 2024
EU AI Act

The European Parliament has approved the world’s first comprehensive AI legislation, sparking both excitement and concern. 

The law passed with 523 votes in favor, 46 against, and 49 abstentions. It will most likely enter into force this May.

The AI Act introduces a novel, risk-based approach to AI governance. It categorizes AI systems based on their potential threats and regulates them accordingly. 

EU law-making is notoriously complex, and the Act is the culmination of several years of effort, including some hairy moments when certain member states grew hesitant about its impact on their economies and competitiveness.

Its trajectory took another hit in June last year when some 150 major European companies warned against pursuing restrictive regulations. 

The wait is finally over. The AI Act is highly comprehensive, covering everything from everyday tools like spam filters to more complex systems used in healthcare and law enforcement. 

Among its most notable rules, the Act outright bans AI systems capable of cognitive behavioral manipulation, social scoring, and unauthorized biometric identification. 

Then there’s the “high-risk” category, which includes AI in critical infrastructure, educational tools, employment management, and more. These systems will undergo rigorous assessments both before and after market launch, and members of the public can file complaints about them with designated national authorities.

Generative AI, like OpenAI’s ChatGPT, gets a special nod in the Act. While not labeled as high-risk, these platforms are expected to be transparent about their workings and the data they train on, aligning with EU copyright laws.

Here’s a short summary of the Act’s key rules:

  • Banned AI systems: Involve cognitive behavioral manipulation, social scoring, unauthorized biometric identification, and real-time/remote facial recognition.
  • High-risk AI systems: Related to critical infrastructures, educational/vocational training, product safety components, employment and worker management, essential private and public services, law enforcement, migration/asylum/border control, and the administration of justice/democratic processes.
  • Assessment and complaints: High-risk AI systems will undergo assessments before market launch and throughout their lifecycle. Individuals have the right to file complaints with national authorities.
  • Generative AI: Systems like ChatGPT must meet transparency requirements and EU copyright law, including disclosing AI-generated content, preventing illegal content generation, and summarizing copyrighted data used for training.
  • Implementation timeline: The AI Act is set to become law by mid-2024, with provisions rolling out in stages over two years. Banned systems must be phased out within six months, rules for general-purpose AI apply after one year, and full enforcement begins two years after the Act becomes law.
  • Fines: Non-compliance can result in fines of up to 35 million Euros or 7% of worldwide annual turnover.
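To put the fine ceiling in perspective, here is a quick back-of-the-envelope calculation. It assumes the higher of the two figures applies, which is how the Act’s top penalty tier is generally described; the turnover figures are purely hypothetical.

```python
# Illustrative sketch of the Act's top fine tier, assuming the higher of
# EUR 35 million or 7% of worldwide annual turnover applies.
# Turnover figures below are hypothetical, not real companies.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations (assumed rule)."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A firm with EUR 100 million turnover: the flat EUR 35 million cap dominates.
print(max_fine_eur(100_000_000))    # 35000000.0

# A firm with EUR 1 billion turnover: 7% of turnover (EUR 70 million) dominates.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```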

In addition to regulating AI training and deployment, one of the most hotly anticipated aspects of the Act was its copyright rules. These include the following:

  • AI models designed for a wide range of uses must disclose summaries of the training data they utilize.
  • The disclosures should be sufficiently detailed to allow creators to identify if their content was used in training.
  • This requirement also applies to open-sourced models.
  • Any modifications to these models, such as fine-tuning, must also include information on the training data employed.
  • These rules apply to any model offered in the EU market, regardless of where it was developed.
  • Small and medium-sized enterprises (SMEs) will face more flexible enforcement but must still comply with copyright laws.
  • The existing provision for creators to exclude their work from being used in AI training remains in place.

This seems like a step forward in protecting people’s data from being used for AI model training without their permission.

Set to formally become law by mid-2024, the AI Act’s provisions will gradually come into effect.

The EU expects banned AI practices or projects to be terminated within six months. A year later, general-purpose AI systems must comply with the new rules, and within two years, the law comes into force in its entirety. 

While the Act has both supporters and critics, it’s a landmark event for the tech industry and challenges other regions to accelerate their AI governance strategies.

MEP Dragos Tudorache framed it as the start of a new era for AI governance, highlighting the legislation’s pioneering spirit: “The AI act is not the end of the journey but the starting point for new governance built around technology.”

As businesses, tech giants, and governments worldwide watch closely, it’s evident that the ripple effects of this legislation will be felt far beyond European borders. We’ll understand the true impact soon. 

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
