Tech companies band together to tackle deep fake electioneering

February 16, 2024

At the Munich Security Conference, a coalition of 20 tech giants, including OpenAI, Meta, and Microsoft, announced a joint effort to combat deceptive AI content influencing elections worldwide.

This comes amid growing concerns that AI-generated deep fakes could manipulate electoral processes, particularly as major elections loom across several countries this year. 

We’ve already seen deep fakes play a part in elections in at least Pakistan, Indonesia, Slovakia, and Bangladesh.

This new agreement encompasses commitments to developing tools for detecting and addressing misleading AI-generated media, raising public awareness about deceptive content, and taking swift action to remove such content from their platforms.

The truth is, though, we’ve heard this many times. So what’s different now?

While specifics on implementation timelines remain vague, companies emphasized the need for a collective approach to tackle this evolving threat. 

Tech companies pledged to deploy collaborative tools to detect and mitigate the spread of harmful AI-generated election content, including techniques such as watermarking to certify a piece of content’s origin and any alterations. They also committed to transparency about their efforts and to assessing the risks posed by their generative AI models.
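To make the provenance idea concrete, here is a minimal sketch in Python of the underlying principle: a publisher cryptographically binds itself to the exact bytes of a piece of media, so any later alteration can be detected. This is a toy illustration using a shared secret (the `SIGNING_KEY` is hypothetical); real provenance standards such as C2PA use public-key certificates and embed a signed manifest in the file itself.

```python
import hashlib
import hmac

# Hypothetical shared secret, standing in for the publisher's private
# key that a real provenance scheme would use.
SIGNING_KEY = b"publisher-secret-key"

def sign_content(content: bytes) -> str:
    """Produce a tag that binds the publisher to these exact bytes."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Any change to the bytes, however small, invalidates the tag."""
    return hmac.compare_digest(sign_content(content), tag)

media = b"<image bytes>"  # placeholder for a real media file
tag = sign_content(media)

print(verify_content(media, tag))         # True: content is untouched
print(verify_content(media + b"!", tag))  # False: content was altered
```

Note the limitation: a tag like this can certify where signed content came from, but it says nothing about media that was never signed in the first place.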

“I think the utility of this (accord) is the breadth of the companies signing up to it,” said Nick Clegg, president of global affairs at Meta Platforms. 

“It’s all good and well if individual platforms develop new policies of detection, provenance, labeling, watermarking and so on, but unless there is a wider commitment to do so in a shared interoperable way, we’re going to be stuck with a hodgepodge of different commitments.”

Again, this is nothing we haven’t heard before. There have been several cross-industry agreements, yet no substantially effective plan to stop deep fakes.

For example, MLCommons collaborated with Big Tech to define safety benchmarks, companies committed to watermarking, and many joined the Frontier Model Forum, again with the aim of establishing a ‘unified approach.’ Those are just three industry-wide agreements that spring to mind; there are certainly others.

Deep fakes are not easy to detect, particularly at scale. They’ve become so close to the real thing that identifying them with AI or algorithmic techniques is exceptionally difficult.

Tech companies have responded by tagging content with metadata that identifies it as AI-generated, but how does that reveal the purpose of an image?

Metadata is also easily stripped from a file. And there will always be AI companies that don’t play ball with such agreements, as well as ways to outflank whatever controls are in place.
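To see just how fragile metadata-based labels are, here is a short sketch using the Pillow imaging library (the file names are hypothetical): simply re-saving an image’s pixels, without explicitly forwarding its metadata, discards the EXIF tags, including any ‘AI-generated’ label a platform may have attached.

```python
from PIL import Image  # pip install Pillow

# Hypothetical file names, for illustration only.
img = Image.open("ai_generated.jpg")
print("exif" in img.info)  # True if the file carries EXIF metadata

# Saving only the pixel data, without passing the metadata through,
# silently drops the EXIF block and any AI-provenance label in it.
img.save("stripped.jpg")

clean = Image.open("stripped.jpg")
print("exif" in clean.info)  # False: the label is gone, the image looks the same
```

No exploit is needed; this is ordinary image-handling behavior, which is why more robust schemes try to embed watermarks in the pixels themselves rather than in detachable metadata.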

Dana Rao, Adobe’s Chief Trust Officer, explained why this kind of content is so effective. “There’s an emotional connection to audio, video, and images,” he said. “Your brain is wired to believe that kind of media.”

Indeed, deep fakes seem to keep spreading long after being declared fake. While it’s difficult to quantify quite how much they change our behavior, the sheer scale of their reach, with content viewed by millions of people at a time, isn’t something you can take risks with.

The fact of the matter is that we can expect more deep fake incidents and controversies.

Individual awareness and critical thinking will be humanity’s biggest weapons against their negative impacts.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
