GitHub, Hugging Face, Creative Commons, and several other tech organizations have issued an open letter to EU policymakers, urging them to re-evaluate some elements of the AI Act.
The letter argues that the proposed regulations risk impeding the development of open-source AI models, and it offers five recommendations to mitigate that risk.
One of the primary points of contention is that the Act treats some open-source projects as if they were commercial or deployed AI systems, which could create difficulties for individual developers and non-profit researchers.
The AI Act received an overwhelming majority of 499 votes in favor, 28 against, and 93 abstentions in the European Parliament in June.
However, it will only be enacted once it reaches consensus with the EU Council, which represents the 27 member states.
Here’s a summary of the letter’s five recommendations:
- Definition of “AI components”: The letter recommends that the EU AI Act include a precise definition of “AI components,” a term the Act uses frequently without delineating what it covers.
- Clarification on open-source development: The Act should clarify that the collaborative development of open-source AI, such as making AI components freely available in public repositories, is not subject to its requirements. This would distinguish open-source development from commercial activity.
- Support for an “AI Office”: GitHub and others support the EU’s intention to establish an AI Office.
- Effective exceptions to the Act: The letter advocates for exceptions for activities related to real-world testing of AI systems, which helps developers understand their performance and identify weaknesses.
- Proportional requirements for “foundation models”: Finally, the letter calls for a nuanced approach that differentiates between types of models, arguing that the Act should account for the unique characteristics of open-source models.
GitHub and the other signatories hope the Act will balance risk mitigation with fostering AI innovation, and they argue their recommendations would support the open-source community, which plays a significant role in the industry.
Regulation could hit open-source AI disproportionately
Big tech companies like Microsoft, Google, and Meta are adept at managing regulatory compliance.
That’s not to say they never make serious missteps, but they have the muscle and resources to comply with AI regulations.
In fact, for the most part, big tech has been highly encouraging of regulation and proactive about self-regulation. Google, Anthropic, OpenAI, and Microsoft formed the Frontier Model Forum this week, initiating collaboration between competitors.
The open-source community, by contrast, believes regulation will hit it harder, potentially even criminalizing some forms of public AI R&D.
This would benefit big tech, as open-source AI is perhaps its greatest competitor. That would certainly explain why these rivals are so willing to work together.