A deepfake video featuring Kari Lake, created by the digital news outlet Arizona Agenda, surfaced online.
In the video, an AI-generated Lake endorses the Arizona Agenda, stating, “Subscribe to the Arizona Agenda for hard-hitting real news.”
It then delivers a message about the role of AI in elections: “And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting.”
The video deceived even Hank Stephenson, Arizona Agenda’s co-founder and journalist, who admitted, “When we started doing this, I thought it was going to be so bad it wouldn’t trick anyone, but I was blown away.”
The video, though perhaps benign in its intent, backfired spectacularly. Lake's team responded with a cease-and-desist letter demanding that the videos be removed from all platforms immediately, warning that failure to comply would result in legal action.
Faced with this legal pressure, Stephenson said he consulted lawyers about how to respond. He maintains that these deepfakes are vital learning tools, stating, “Fighting this new wave of technological disinformation this election cycle is on all of us.”
You can view the video below. The description says, “This is a deep fake video of Kari Lake. It is NOT real and it was created without Ms. Lake’s permission. Ms. Lake does not endorse our publication in any way.”
A fair caveat? Possibly, but creating self-promotional deepfakes of people without their permission isn’t a great way to go about raising awareness.
As one commenter on Reddit said, “It doesn’t matter who it is — deepfakes made to deceive people in terms of politics is dangerous and wrong.”
The issue extends beyond this one video and into the broader political arena. Donald Trump himself has previously accused opponents of using AI-generated content against him, showing how deepfakes are both a weapon and a vulnerability.
AI misuse in elections is an international trend, hitting countries from Slovakia to Indonesia. Digital tactics for influencing political outcomes through deception have become more diverse and realistic.
Regulators have begun to respond: the Federal Communications Commission (FCC) has banned certain AI-generated robocalls, and lawmakers have formed a bipartisan task force to explore AI regulation, early steps toward addressing deepfakes.
However, even as AI technology advances swiftly, the Federal Election Commission (FEC) has yet to establish rules governing AI in political ads.
AI is moving faster than legislators, and the next controversy is probably imminent.