The Israel-Gaza conflict has become a backdrop for the evolving use of AI to create deep fakes, a problem for Israel, Gaza, and external observers alike.
Among the many images circulating from Gaza, some are particularly jarring. Pictures of “bloodied, abandoned infants,” as they were described, gained notoriety online.
One image of an incinerated baby was shared by influencers such as Ben Shapiro and reached millions of people before being engulfed in controversy over claims that it was AI-generated.
Many of these fake images exhibit subtle yet revealing signs of digital manipulation, like “fingers that curl oddly, or eyes that shimmer with an unnatural light.” However, spotting these signs in the heat of the moment is exceptionally difficult.
Moreover, the quality of the images varies. Some are repurposed photos from other conflicts, while others are entirely AI-generated. In time, they will only become more realistic.
As Jean-Claude Goldenstein, CEO of CREOpoint, describes: “It’s going to get worse — a lot worse — before it gets better.” His company has compiled a database of the most viral deep fakes from the conflict, showcasing AI’s growing role in fabricating reality during conflicts.
The primary purpose of deep fakes is to stir shock and distress, which is why they frequently depict children to intensify emotional responses.
As Imran Ahmed, CEO of the Center for Countering Digital Hate, explains, “The disinformation is designed to make you engage with it.”
Deep fakes are on the rise
Deep fakes are a pervasive side effect of AI development. Not only do they risk being accepted as authentic, but they also risk genuine content being dismissed as fake.
Moreover, this phenomenon is certainly not limited to the Gaza conflict. Similar AI-generated content surfaced during Russia’s invasion of Ukraine in 2022, including a doctored video of Ukrainian President Zelenskyy appearing to tell Ukrainians to surrender.
Other examples include fake images of politicians, such as Trump hugging Fauci. And the issue doesn’t stop at images – fake audio has placed politicians in conversations that never took place.
Tech firms worldwide are responding by developing AI filters that can detect deep fakes, authenticate images, and analyze text for misinformation. Google, YouTube, and Meta recently vowed to label AI-generated content, but doing so reliably at scale is technically challenging.
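To see why, consider one small ingredient of image authentication: checking whether a file carries any provenance metadata at all. The minimal Python sketch below (using the Pillow library; the filename suspect_photo.jpg is hypothetical) is purely illustrative and is not how any of the companies above have said their systems work. It mainly demonstrates how weak a single check is, since social platforms routinely strip EXIF data on upload.

```python
# Illustrative sketch only: inspect whatever EXIF metadata survives in an
# image file. An empty result is inconclusive -- platforms often strip
# metadata -- and a populated one can be forged, which is why labeling
# AI-generated content reliably is so hard.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return the human-readable EXIF tags present in the file (often none)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = summarize_exif("suspect_photo.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found: inconclusive, not proof of fakery.")
    else:
        for name, value in tags.items():
            print(f"{name}: {value}")
```

Real detection pipelines layer many weak signals of this kind (provenance standards such as C2PA, model-based classifiers, reverse image search) precisely because no single check is decisive.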
We’ve also witnessed a deluge of fact-checking services designed to flag dubious content, but their processes are slow and fallible. And what are the implications when a ‘reputable’ fact checker incorrectly labels something as real or fake?
With crucial elections on the horizon, not least the 2024 US presidential election, deep fakes are stirring intense paranoia. Their impact has yet to reach its ceiling.