France, Germany, and Italy have reached an agreement on how AI should be regulated, and in some respects it is even stricter than the proposed EU AI Act.
The details of the agreement were contained in a joint paper that Reuters reportedly gained access to.
Germany’s Economy Minister Robert Habeck and Digital Affairs Minister Volker Wissing were tasked with representing Germany’s interests at the negotiations. Wissing told Reuters, “We need to regulate the applications and not the technology if we want to play in the top AI league worldwide.”
The focus on the application of AI rather than the underlying technology mirrors the sentiment of the EU AI Act that the European Parliament presented in June.
The joint paper further stated that “the inherent risks lie in the application of AI systems rather than in the technology itself.” Although it didn’t prescribe technical specifications for foundation models, it did call for AI companies to provide a “model card” for their AI models.
The model card would contain information related to how the model functions, its capabilities, and its limitations.
An interesting departure from the EU AI Act is that the three countries agreed that the regulations should be binding on AI companies regardless of their size.
The EU AI Act called for regulations to only be binding on the larger AI companies like Meta or OpenAI.
In their discussions, France, Germany, and Italy agreed that the unintended consequence of giving smaller AI companies a free pass would be reduced trust and less adoption of models from smaller tech firms.
UK says no to AI regulation
The UK was at the forefront of the pursuit of AI safety when it led a number of countries to sign the Bletchley Declaration. That document outlines in broad terms how the signatories intend to pursue AI tech in a ‘safe and responsible manner’.
Jonathan Camrose, Britain’s Minister for AI and Intellectual Property, said that the UK would not be imposing any new laws to regulate the development of AI in the short term.
“I would never criticize any other nation’s act on this, but there is always a risk of premature regulation. Scrambling to regulate AI would limit the technology,” Camrose said.
As the EU moves to tighten the screws on AI development, the UK is heading in the opposite direction, presenting itself as an attractive environment for AI tech companies to operate in.
Commenting on other nation states’ rush to regulate AI, Camrose said, “You are not actually making anybody as safe as it sounds. You are stifling innovation, and innovation is a very, very important part of the AI equation.”
As the EU pursues safety measures for anticipated or perhaps imagined AI risks, it may be entering the AI race with a self-imposed handicap.