AI is reshaping political campaigns around the world, offering candidates new tactics while opening the door to misinformation.
Politicians from Toronto to New Zealand and Chicago are already employing AI in their campaigns.
For example, a Toronto mayoral candidate used AI to generate dystopian images of homeless encampments, while a political party in New Zealand posted an AI-generated image of a fictitious jewelry store robbery. In Chicago, controversy erupted when a Twitter account posing as a news outlet used AI to clone a mayoral candidate’s voice, making it sound as if he condoned police brutality.
Many worry that deepfakes could be used to smear politicians and damage their reputations. Existing safeguards, including services that claim to detect AI-generated content, appear largely ineffective.
The 2024 US presidential election is already feeling AI’s impact. After Biden announced his re-election bid, the Republican National Committee released an AI-generated video depicting worst-case scenarios of a second term. The Democrats, for their part, have been experimenting with AI-generated fundraising messages.
Efforts to regulate AI are in progress, with bills introduced in Congress to require disclaimers on AI-generated political ads.
Meanwhile, the American Association of Political Consultants has declared the use of deep fake content in political campaigns a breach of their ethical code.
Regardless, some politicians, like Toronto mayoral candidate Anthony Furey, are leaning into AI. Furey used it to generate content bolstering his tough-on-crime stance, even though some of the images were clearly artificial.
Political experts consider the spread of misinformation to be one of AI’s most pressing threats.
According to Ben Colman, the CEO of Reality Defender, unlabeled AI content can cause “irreversible damage” before it is addressed. “Explaining to millions of users that the content they already saw and shared was fake, well after the fact, is too little, too late,” he said.
Realistic AI deepfakes also raise fears of the “liar’s dividend,” a phenomenon in which politicians dismiss authentic but compromising footage as fake.
As Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, explains, “If people can’t trust their eyes and ears, they may just say, ‘Who knows?’”