US regulators target AI-generated deep fakes ahead of 2024 elections

  • The US Federal Election Commission (FEC) is weighing restrictions on AI-generated political ads
  • The move comes as deep fake images worm their way into political adverts
  • Regulators and stakeholders will now embark on a 60-day consultation process

The US Federal Election Commission (FEC) has proposed regulating AI deep fake content in political ads, citing its potential to deceive voters ahead of the 2024 elections.

In a unanimous decision on August 10th, the FEC proposed a regulation designed to oversee the use of AI-crafted deep fake content within political advertisements. The initiative is primarily focused on the forthcoming 2024 US elections.

This initiates a 60-day public commentary period, offering stakeholders, specialists, and the public a platform to share their insights. 

The proposal originates from a petition submitted by the advocacy group Public Citizen.

Addressing the potential challenges deep fakes might pose to democratic systems, Public Citizen president Robert Weissman argued that deep fakes distort truths and propagate deceptive information. 

Weissman said, “The FEC must use its authority to ban deep fakes or risk being complicit with an AI-driven wave of fraudulent misinformation and the destruction of basic norms of truth and falsity.”

There are already many examples of AI-generated content appearing in political adverts and campaign material. For instance, the campaign of Florida Governor and Republican presidential candidate Ron DeSantis circulated fake images depicting former President Donald Trump hugging Dr. Anthony Fauci.

During the FEC discussion, Lisa Gilbert, the Executive Vice President of Public Citizen, outlined the need for a clearer understanding of how current laws on “fraudulent misrepresentation” might be applied to AI deep fakes. 

Notably, members from both chambers of the US Congress endorsed the initial petition by Public Citizen, signifying a shared appetite to protect democratic systems from AI deep fakes. 

In June, the American Association of Political Consultants declared the use of deep fake content in political campaigns a breach of their ethical code. However, firm legal measures against their use would be a considerably more robust intervention. 

Deep fakes have struck political life in the UK, too, where Prime Minister Rishi Sunak was depicted pouring a sub-standard pint of beer – a grave mistake for a British public servant. In reality, Sunak’s pint was satisfactory.

Deep fakes have been repeatedly highlighted as one of AI’s most potent and immediate risks, creating a scenario where the public can’t easily distinguish between falsehood and reality.

© 2023 Intelliquence Ltd. All Rights Reserved.
