Deep fakes have rapidly become a serious social and political issue and a possible threat to election integrity.
Shortly before Slovakia’s closely run election, a contentious audio clip surfaced on social media in which a voice strikingly similar to that of Progressive Slovakia leader Michal Šimečka discussed a plan to manipulate the election.
Because the clip appeared so close to polling time, there was little opportunity to warn voters that it was fake.
Not long after, another controversy brewed when an audio clip posted on X appeared to catch the UK Labour Party leader barking expletives at staffers. “I have obtained audio of Keir Starmer verbally abusing his staffers at [the Labour Party] conference,” the account posted. “This disgusting bully is about to become our next PM.”
In the same period, a TikTok account was removed after it was found to be posting fake clips of the imprisoned former Sudanese president Omar al-Bashir, fanning the flames in a country already gripped by civil war.
The clips in all three cases had something in common: they were fake and likely created with AI.
While AI-generated video and images are ubiquitous and pose their own issues, fake audio clips are harder to detect and often slip through the net. The cloned voices tend to be highly convincing; as Jack Brewster of NewsGuard highlighted, “Obama still looks a little plasticky when bad actors use his face. But the audio of his voice is pretty good — and I think that’s the big difference here.”
In light of these concerns, a group of senators proposed the NO FAKES Act, a bill that would hold liable those who produce or share AI-generated replicas of a person’s voice or likeness without their consent.
As Hany Farid, professor of digital forensics at the University of California at Berkeley, says, “This is not hypothetical. You’re talking about violence, you’re talking about stealing of elections, you’re talking about fraud — [this has] real-world consequences for individuals, for societies and for democracies.”
Voice-cloning applications have evolved dramatically, with platforms like ElevenLabs offering tools that can create a cloned voice for a monthly fee of as little as $5.
Farid, underscoring the responsibility of social media giants, stated, “They could turn the spigot off right now if they wanted to. But it’s bad for business.”
Can AI deep fakes really sway elections?
Deep fakes have already demonstrated their real-world impact: earlier this year, a fake image of an explosion at the Pentagon caused markets to dip for a few minutes.
These fake audio clips, images, and videos threaten an already fragile information landscape, one where suspicion runs high and margins are tight.
While the true impact of AI deep fakes on political discourse and voting behavior is exceptionally difficult to assess, the technology clearly lowers the barrier to producing disinformation.
Ben Winters, senior counsel at the Electronic Privacy Information Center, says, “Degrees of trust will go down, the job of journalists and others who are trying to disseminate actual information will become harder.”
Moreover, while falsely implicating politicians in reputation-damaging scenarios is one of the more common tactics of these campaigns, deep fakes also create a “liar’s dividend,” making it possible to dismiss truthful allegations as fake.
Combating AI deep fakes requires a combination of more advanced AI content filters on social media and public education. The quality of deep fakes themselves, however, is only going to improve.
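To make the idea of automated audio filters concrete, here is a minimal, illustrative sketch of one common approach: summarize each clip with spectral features (MFCCs) and train a simple classifier to flag suspect audio for human review. Everything here, from the synthetic stand-in clips to the decision threshold, is an assumption for demonstration; production detectors are trained on large corpora of genuine and cloned speech and are far more sophisticated.

```python
# Illustrative sketch only: a toy audio "deepfake filter" built from
# MFCC features and logistic regression. The training data is synthetic
# noise standing in for labeled real/cloned clips.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16_000  # assumed sample rate for all clips


def clip_features(waveform: np.ndarray, sr: int = SR) -> np.ndarray:
    """Summarize a clip as the mean of its MFCC coefficients."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)


rng = np.random.default_rng(0)
# Placeholder "real" and "fake" clips (1 second of noise each); a real
# system would load labeled recordings of genuine and cloned speech.
real_clips = [rng.normal(0, 0.1, SR).astype(np.float32) for _ in range(20)]
fake_clips = [rng.normal(0, 0.3, SR).astype(np.float32) for _ in range(20)]

X = np.stack([clip_features(c) for c in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 1 = fake

model = LogisticRegression(max_iter=1000).fit(X, y)


def flag_clip(waveform: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the clip should be routed to human review."""
    prob_fake = model.predict_proba(clip_features(waveform)[None, :])[0, 1]
    return prob_fake >= threshold
```

Even a filter like this only triages content; the threshold trades false alarms against missed fakes, which is one reason platforms pair automated screening with human moderation.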