Some of the best AI image detectors can be fooled simply by adding texture to an image or degrading it slightly.
Even though AI-generated images haven't been around for very long, we've already learned better than to trust our eyes. But it turns out we can't even trust the machines that are supposed to tell us whether an image is real or fake.
A recent experiment reported by The New York Times found that, for the most part, AI image detectors like Hive and Umm-maybe correctly classified AI-generated images. That is, until grain was added to the images.
When the images were degraded with a small amount of grain, or pixelated noise, the detectors were mostly convinced that the AI-generated images were real. In one case, a black-and-white image of a man was initially rated, correctly, as having a 99% likelihood of being AI-generated. After a slight amount of grain was added, the detector dropped that likelihood to 3.3%, essentially declaring the image almost certainly real.
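For readers curious what "a small amount of grain" amounts to in practice, here is a minimal sketch of one way to add subtle noise to an image using Python with the Pillow and NumPy libraries. This illustrates the general idea only, not the methodology the Times actually used; the file names and noise level are placeholders.

```python
import numpy as np
from PIL import Image

# Load the image as a float array so the noise math doesn't wrap around
# at the edges of the 8-bit pixel range.
img = np.asarray(Image.open("suspect.png").convert("RGB"), dtype=np.float32)

# Add mild zero-mean Gaussian "grain". A standard deviation of ~8 on a
# 0-255 scale is a subtle, barely visible amount of noise (hypothetical
# value chosen for illustration).
rng = np.random.default_rng()
grainy = img + rng.normal(0.0, 8.0, size=img.shape)

# Clip back into the valid pixel range and save the degraded copy.
Image.fromarray(np.clip(grainy, 0, 255).astype(np.uint8)).save("suspect_grainy.png")
```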
When a fake image of the Pope in a Balenciaga puffer jacket fools some people, it's harmless fun. But when AI-generated images are used in elections and can't be identified as fake, the implications could be far more dire.
Ron DeSantis was recently criticized for using AI-generated images in his campaign. His campaign interspersed three real images of Trump with three AI-generated ones showing Trump embracing and kissing Dr. Anthony Fauci. Agence France-Presse was the first to point out the fakes.
If you're even vaguely familiar with Trump's politics, you don't need software to tell you that a photo of him kissing Fauci is fake. But AI image detectors don't use context or subject matter to decide whether a photo is real. Instead, they look for telltale pixel patterns that AI image generators leave behind when producing an image.
It turns out that if you take those high-res images and degrade them even slightly, even the best AI fake detectors will often declare the image genuine. Simply asking Midjourney to produce a poor-quality vintage photo is enough to fool some of these tools.
It's an interesting development, but how significant it is remains to be seen. Foolproof detectors are unlikely to be what saves us from the real-world consequences of AI-generated disinformation. A healthy dose of skepticism, and educating people about the risks of trusting their eyes, will likely be far more effective.