Meta has announced a significant policy shift in its handling of political advertising: political advertisers must now disclose when they use third-party AI software in their ads.
The requirement applies specifically to ads featuring synthetically produced depictions of people or events that could affect political or social issues.
Additionally, Meta will bar advertisers from using its own AI-assisted ad-creation tools for ads about political and social issues, as well as for ads concerning housing, employment, credit, health, pharmaceuticals, and financial services.
Remarkably, Meta released a generative AI platform explicitly designed for producing ads only in October; those tools remain available to advertisers outside the restricted categories.
There is growing evidence that AI-generated deep fakes are skewing perceptions and opinions.
Commenting on a widely circulated fake image of Donald Trump hugging infectious disease expert Dr. Anthony Fauci, Vince Lynch, an AI developer, said, “It’s gotten to be a very difficult job for the casual observer to figure out: What do I believe here?” He continued, “The companies need to take responsibility.”
The move to ban AI ads is part of Meta’s broader initiative to mitigate potential risks associated with generative AI in advertising, especially within sensitive and regulated domains.
It also reflects a continuation of the company’s fraught history with political advertising, which has been a hotbed of controversy, especially after the 2016 election cycle.
Meta’s founder and CEO, Mark Zuckerberg, has previously come under fire for the platform’s handling of political misinformation. Despite criticism, Zuckerberg has maintained that allowing a wide berth for political speech is essential for free discourse.
Meta’s leadership, including Nick Clegg, its president of global affairs, has since sought regulatory guidance on these issues rather than self-imposing strict rules.
The current system requires political ad runners to undergo an authorization process and include a “paid for by” label on the ads.
The forthcoming AI policy will extend these transparency efforts by requiring campaigns and marketers to indicate whether AI tools were used to create or modify an ad. Ads that are upfront about AI usage will be allowed to run, accompanied by a note disclosing that usage.
Meta has stated it won’t require disclosure for changes it deems “inconsequential or immaterial,” such as simple photo retouching. Ads that appear to use AI without appropriate disclosure will be outright rejected.
The company also warned that organizations that repeatedly try to bypass the disclosure requirement will face penalties, though it has not specified what those penalties will be.
Meta’s decision to ban AI tools for certain ad categories could also be a strategic move to avoid potential legal challenges. The company has previously faced legal action, notably in 2019 when the Justice Department sued it for allowing discriminatory ad targeting practices.
The lawsuit was eventually settled, with Meta agreeing to modify its ad technology and pay a fine.
Google updated its own policies along similar lines in September, and other companies are expected to follow suit as concerns about AI-generated ads ramp up ahead of the 2024 US presidential election.