Google’s updated terms relating to the use of AI-generated or other “inauthentic” material in political advertising will go into effect in mid-November.
The new terms require political adverts to include a “clear and conspicuous” disclosure if the ads include “synthetic content that inauthentically depicts real or realistic-looking people or events.”
The policy applies to image, video, and audio content. The disclosure will not be required if, for example, AI is used for editing that has no bearing on the authenticity of the claims made in the ad.
With the US presidential elections just over a year away, discussions about AI and its potential to supercharge political disinformation are heating up. Google’s new terms don’t specifically mention AI, but advances in generative AI tools are likely at the core of its latest move.
The examples in its updated terms make it pretty clear what kind of ads would need the prominent disclosure to be displayed:
- An ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do
- An ad with synthetic content that alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place
Over the last few months, we’ve already seen some political content that would fall foul of these new terms.
Like this AI-generated video that purported to show what a dystopian future would look like should Biden be re-elected in 2024.
If you look carefully at the top-left corner of the video, you’ll see text that says “Built entirely with AI imagery.”
The wording of the disclosure may be fine, but it would have to be a lot bigger and more conspicuous if it were to meet Google’s requirements.
Ron DeSantis would have had to add a similar disclosure to his campaign ad that used AI to generate images of Donald Trump embracing Anthony Fauci. The ad gathered a fair amount of attention from US regulators earlier this year.
[Embedded post from The Verge (@verge), June 8, 2023]
Truth and politics have never been close companions, and during campaign season politicians tend to exercise more creative license than usual.
The line between political satire, messaging, and intentional disinformation can get a little blurred at times. Many people would immediately recognize the Biden video or the picture of Trump with Fauci as obviously inauthentic.
Is it patronizing to the electorate to have to put a big “Danger AI!” label on content like this? Then again, critical thinking in general doesn’t seem to be keeping pace with AI’s capabilities.
And as good as we may think we are at spotting a fake, our confirmation bias is often squarely on Team AI’s side.
For now, platforms like Meta and X haven’t added similar requirements for their political ads. As the elections near, we’ll probably see some interesting AI-generated political ads that may compel them to follow Google’s lead.