Just when you thought you were getting accustomed to spotting fake news, we now have to deal with fake images popping up on social media.
There was an explosion at the Pentagon on Monday, May 22nd – or at least that’s what you might’ve believed if you saw this photo circulating on Twitter.
It wasn’t until observers zoomed in on the building and railings that they realized this was, in fact, an AI-generated image. Look carefully and you can see them blending into each other – but it’s far from obvious.
Confident that this picture claiming to show an “explosion near the pentagon” is AI generated.
Check out the frontage of the building, and the way the fence melds into the crowd barriers. There’s also no other images, videos or people posting as first hand witnesses. pic.twitter.com/t1YKQabuNL
— Nick Waters (@N_Waters89) May 22, 2023
The Arlington Fire Department quickly denounced the images as fake, but not before the story had been picked up by Russia Today – which promptly deleted it – and Indian media outlets News18 MP, First India News, Times Now Navbharat, and Zee News.
Shockingly, the reaction was substantial enough to cause the US stock market to dip by 0.3% between 10:06 am and 10:10 am ET.
Deep fakes on the rise
Deep fakes are nothing new, but this one wasn’t as benign as the Pope wearing a Balenciaga puffer jacket.
This latest deep fake fiasco also highlights the dangers of “Twitter Blue,” as several accounts that shared the story carried a blue check mark, including one impersonating Bloomberg. Twitter Blue has been criticized for lending credibility to accounts impersonating celebrities, businesses, and even government agencies.
Some AI tools can detect fake images, including Google’s forthcoming “About this image” tool.
However, the quality of AI image outputs will only improve, so it’s still down to humans to judge what’s real and what isn’t – for now, at least.