The EU is encouraging big tech firms that incorporate generative AI into their products, such as TikTok, Meta, Google, and Microsoft, to label services that could spread disinformation.
On Monday, European Commission Vice President Vera Jourova highlighted generative AI’s capacity to circulate disinformation through fake news and deepfakes. May saw a slew of deepfakes go viral across social media, including a fake photo of an explosion at the Pentagon that briefly sent the US S&P 500 stock index down by 0.3%.
Jourova said generative AI introduces “fresh challenges for the fight against disinformation” and raises “new risks and the potential for negative consequences for society.”
She said companies who signed up for the EU’s voluntary code to fight disinformation should start to “clearly label” services that risk disseminating misinformation.
The rush to regulate AI
These comments come ahead of the EU Digital Services Act, due to come into force in August, which obliges ‘very large online platforms’ operating in the EU (defined as platforms with more than 45 million users) to tackle harmful content and increase user privacy. AI has evolved since the Act was first drafted, so the EU updated it with rules for clamping down on fake accounts and deepfakes.
The EU AI Act, a more detailed and far-reaching piece of legislation designed to regulate AI, goes to a plenary vote on June 14, after which it enters the final legislative process. It is unlikely to come into force before 2026, a timeline that both AI industry leaders and EU officials acknowledge is not soon enough.
Remarkably, it’s not only politicians who are alarmed by AI’s rapid evolution but also the CEOs of firms such as OpenAI and Google, who have separately discussed a potential ‘AI pact’ or ‘stop-gap’ rules to slow down AI development.
Last month, OpenAI CEO Sam Altman encouraged Congress to regulate AI and later said at a conference, “I think it’s going to get to a good place. I think it’s important that we do this. Regulatory clarity is a good thing.”
Will big tech self-regulate AI?
Self-regulation depends on voluntary action and collaboration between leading AI players such as OpenAI, Microsoft, and Alphabet.
Is that realistic? Well, the EU’s most recent voluntary framework for disinformation had a high-profile absentee: Twitter.
Twitter pulled out of the EU’s voluntary code and later declined to comment further. Thierry Breton, the EU’s Commissioner for the Internal Market, said, “you can run but you can’t hide,” and Jourova added, “Twitter has chosen the hard way. They chose confrontation.”
OpenAI, Google, Microsoft, and others have all discussed voluntary AI frameworks or pacts, but Twitter’s exit highlights that it takes only one party to abandon ship and throw the whole crew into disarray.
Not only does self-regulation require AI leaders to come to the table together and put competitive differences aside, but it may also require them to reveal some of their innermost workings.