As we hurtle towards crucial elections this year – not least the US election – we’re witnessing the evil twin of deep fake misinformation: the “liar’s dividend.”
Deep fakes are incredibly lifelike AI-generated imitations of audio, video, or images.
There have been numerous examples of deep fakes impacting people and society, from an AI-generated image of the Pentagon tangibly affecting the stock market last year to a robocall voice mimicking President Biden this week.
However, deep fakes aren’t just useful for distorting the truth – they’re also excellent for refuting it. This has been termed the “liar’s dividend”: the benefit someone gains by dismissing authentic content as fake.
For instance, in response to a Fox News advertisement showcasing his public blunders last year, former President Donald Trump asserted that the footage was AI-created.
He denounced the ad in a post on Truth Social, saying, “The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using AI in their Fake television commercials to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do.”
The Lincoln Project then refuted Trump’s claim, pointing out that the ad comprised well-documented incidents from his presidency.
Libby Lange, an analyst at analytics provider Graphika, echoes the concern of liar’s dividend, stating, “AI destabilizes the concept of truth itself. If everything could be fake, and if everyone’s claiming everything is fake or manipulated in some way, there’s really no sense of ground truth.”
Globally, politicians are now regularly using AI as a scapegoat. For example, a controversial video of a Taiwanese politician suggesting an extramarital affair was quickly dismissed as potentially AI-generated.
Similarly, in the Indian state of Tamil Nadu, a politician refuted a leaked audio recording as “machine-generated,” though its authenticity remains uncertain.
We’ve also seen AI-generated images of war crimes in Israel, Palestine, and Ukraine, such as an image of an incinerated baby, which Ben Shapiro shared with millions of followers before it was discredited as fake.
There have been non-political examples, too, like a voice clip allegedly from a Baltimore County school principal making racist remarks. The clip was claimed to be AI-generated, but confirming that without further context is challenging.
Deep fakes were a hot topic at the recent World Economic Forum in Davos, but efforts to curb them thus far have been shallow and ineffective. Tech companies are exploring ways to automatically verify AI-generated content, but only experts can reliably discern real from fake media.
And even the experts get it wrong, as there are instances where specialist AI detectors come to contested conclusions about an image’s authenticity.
And suppose images are ‘officially’ classified as real or fake, but the verdict is wrong – you can only imagine the hell that might break loose.