Trump supporters have been caught creating and disseminating AI-generated images that falsely depict him alongside African American voters.
This tactic, first uncovered by the BBC, aimed to fabricate a sense of Trump’s popularity among black voters – a demographic that played a crucial role in Joe Biden’s 2020 victory.
Mark Kaye, a conservative radio show host in Florida, was among those who created these images, depicting Trump surrounded by black people and sharing them widely on social media.
Kaye’s approach to this matter was quite straightforward. “I’m not a photojournalist. I’m a storyteller,” he explained.
Kaye took to X on March 4 to defend himself, stating, "Guys! The Fake News @BBC has accused me of leading a 'disinformation' campaign. Oh, the irony. That's like me calling them 'bald!'"
Commenters were largely unsympathetic, with one saying Kaye has been “caught with his pants down.”
Unsurprisingly, these images raised ethical concerns, prompting a response from Cliff Albright, the co-founder of Black Voters Matter.
Albright criticized the manipulation, stating, “There have been documented attempts to target disinformation to black communities again, especially younger black voters.”
While it might be tempting to assume these images are easily dismissible as fake, the BBC found many viewers who believed they were real. How widespread public awareness of deep fakes actually is remains difficult to gauge.
Critics from across the political spectrum argue this tactic not only misrepresents political realities but also deliberately targets vulnerable segments of the electorate.
Deep fakes are taking elections by storm. We’ve seen large-scale campaigns in the Pakistani, Indonesian, Slovakian, and Bangladeshi elections, among others. We’ve also observed deep fake campaigns from foreign state actors aiming to manipulate voting behaviors.
Now, attention turns to the US election, which has already been a testing ground for deep fakes. The heat will only crank up as we approach polling day.
AI bot assistant ‘Jennifer’
On the technological frontier of campaign strategies, Peter Dixon, a Democratic congressional candidate from California, employed an AI bot named “Jennifer” to call voters, raising eyebrows within his own team.
Jennifer’s introduction to voters was clear and upfront: “Hello there. My name is Jennifer and I’m an artificial intelligence volunteer.” The bot is part of Dixon’s broader strategy to reach a wide audience, though it isn’t the first AI robocaller – an earlier one was deployed in Pennsylvania last year.
The results of employing Jennifer were surprisingly positive, challenging initial skepticism. Dixon himself was taken aback by how well it worked, commenting on the public’s reaction: “People were shocked at how good the capability was.”
Deep fake electioneering highlights the dual-edged nature of AI’s role in modern political campaigns.
While AI offers innovative tools for engagement, it also poses ethical challenges, particularly when used to fabricate or manipulate political support.
Drawing lines on fair usage of AI in political scenarios has proved nigh impossible. US regulators have discussed banning various deep fake campaign materials, but this hasn’t materialized.