Deep fakes wreak havoc amid the Israel-Palestine conflict

October 28, 2023


2023 has witnessed a deluge of deep fake images created with AI image generators, from a fake image of an explosion at the Pentagon to images of child abuse. 

Deep fake technology can also produce convincing clones of virtually anyone’s voice, from Johnny Cash singing Barbie Girl to fabricated speeches attributed to ex-Sudanese president Omar al-Bashir.

It’s perhaps unsurprising, then, that deep fakes have permeated the already fraught digital landscape surrounding the current Israel-Hamas conflict.

Overall, analysts believe AI misinformation has been limited – there is no shortage of real, visceral content circulating on social media. However, the mere prospect of AI-generated fakes has led people to question the legitimacy of authentic images, videos, and audio clips.

This is known as the “liar’s dividend”: real content can now be dismissed as AI-generated, falsely rendering it fake in the eyes of onlookers.

Social media creates the perfect arena for suspicion and paranoia about digital content to spread like wildfire – the opportunity to label something that appears real as fake, and vice versa, is one many will seize to reinforce their views.

Bill Marcellino of the RAND Corporation underscores this dilemma: “What happens when literally everything you see that’s digital could be synthetic?”

Similarly, Hany Farid, a digital forensics and AI misinformation specialist, commented on the Israel-Hamas conflict, “Even by the fog of war standards that we are used to, this conflict is particularly messy… The specter of deepfakes is much, much more significant now — it doesn’t take tens of thousands, it just takes a few, and then you poison the well and everything becomes suspect.”

What’s remarkable is how rapidly deep fake tech has evolved. In 2022, spotting inconsistencies in AI-generated images was simple and intuitive, whereas now it often requires specialist analysis.

As emotionally charged discussions about Gaza continue, especially on social media platforms, the erosion of trust becomes more evident.

Fake content draws controversy and, therefore, exposure

The issue with content of questionable authenticity is that it draws debate and, in turn, impressions, likes, and comments.

For example, a post on X, which attracted 1.8 million views, falsely depicted Atletico Madrid fans displaying an enormous Palestinian flag. Users pointed out the distorted figures in the image, indicating AI generation. 

Another post from a Hamas-linked account on platform X, as reported by the New York Times, inaccurately portrayed a tent encampment for displaced Israelis, featuring an incorrect flag design. This post was subsequently removed. 

Scrutiny has also been directed at footage with no apparent signs of AI manipulation. Examples include a video of a Gaza hospital director holding a press conference, which some labeled as “AI-generated” despite multiple sources capturing the event from different angles.

In an attempt to separate truth from AI-generated content, some social media users have turned to detection tools. These tools claim to identify digital manipulation but have proven inconsistent and unreliable.
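For the curious, here is a minimal sketch of how such detection tools are typically queried – many expose little more than an image-in, confidence-score-out interface. It uses Python with the Hugging Face transformers library; the model name is a community-built detector chosen purely for illustration (the file path is a placeholder, and AI or Not’s own methods are proprietary):

```python
from transformers import pipeline

# Load an image-classification pipeline with a community-built detector.
# The model name is an illustrative choice, not the engine behind AI or Not.
detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

# Score a local image file (placeholder path) and print each label's confidence.
for prediction in detector("disputed_image.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.2%}")
```

The key point is that tools like this return probabilistic confidence scores, not verdicts – which is why compression, cropping, or re-uploading an image, as in the AI or Not episode described below, can flip a result from “generated by AI” to “inconclusive.”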

In the early days of the conflict, Prime Minister Netanyahu shared a series of images on platform X, alleging they depicted “horrifying photos of babies murdered and burned” by Hamas. 

When conservative commentator Ben Shapiro drew attention to one of the images on platform X, he faced widespread accusations of spreading AI-generated content. We won’t embed the Tweet directly due to the image’s horrific nature, but you can view the debate on Shapiro’s post here.

One post, which amassed over 21 million views before being taken down, claimed to have evidence that the image of the baby was fake, presenting a screenshot from AI or Not, a detection tool, which labeled the image “generated by AI.”

The company later revised this assessment on platform X, explaining that the result was “inconclusive” due to image compression and alterations that obscured identifying details. The company also announced it had refined its detection tool in response.

Anatoly Kvitnitsky, CEO of AI or Not, reflected on the ethical implications of their technology: “We realized every technology that’s been built has, at one point, been used for evil… We came to the conclusion that we are trying to do good, we’re going to keep the service active and do our best to make sure that we are purveyors of the truth. But we did think about that — are we causing more confusion, more chaos?”

This all adds up to a deeply confusing situation that amplifies emotions already pushed to the limit.

AI both adds to and detracts from public debate and truth. How do people adjust to its presence on social media, and can tech companies ever take back control?


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
