AI image watermarks don’t work, and probably never will

October 3, 2023


Companies are trying to use digital watermarks to identify AI-generated images, but recent research shows their efforts may be in vain.

A team of researchers from the University of Maryland tested state-of-the-art image watermarking techniques, and the results weren’t good.

Soheil Feizi, a professor of computer science at the University of Maryland, said his team tested every existing watermarking technique and broke them all.

The research paper acknowledges that watermarking appears to be the most promising defense against deep fakes but then outlines the deficiencies in current approaches.

The two main approaches to watermarking are low-perturbation and high-perturbation methods.

Low-perturbation methods make a subtle change to an image that is imperceptible to the eye but still digitally detectable. These methods displayed a tradeoff between performance and false positives: the better a method was at identifying an AI fake, the more likely it was to wrongly flag a real image as a fake one.
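
To make the idea concrete, here is a minimal, purely illustrative sketch of a low-perturbation watermark using least-significant-bit (LSB) embedding in Python. This is not one of the schemes the Maryland team tested; the key, function names, and detection threshold here are all hypothetical, chosen only to show how an imperceptible change can be digitally detectable, and how the detection threshold drives the false-positive tradeoff.

```python
import numpy as np

# Hypothetical 256-bit secret key; real schemes derive theirs far more carefully.
KEY = np.random.default_rng(seed=42).integers(0, 2, size=256).astype(np.uint8)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Hide the key in the least significant bits of the first 256 pixel values."""
    flat = image.copy().ravel()
    flat[:256] = (flat[:256] & 0xFE) | KEY  # overwrite each LSB with a key bit
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray, threshold: float = 0.9) -> bool:
    """Report a watermark if enough LSBs match the key.

    Lowering the threshold catches more edited copies but also raises the
    false-positive rate on ordinary images -- the same tradeoff the
    researchers observed in real low-perturbation schemes.
    """
    bits = image.ravel()[:256] & 1
    return float((bits == KEY).mean()) >= threshold

# Stand-in for an AI-generated image: random 8-bit pixels.
img = np.random.default_rng(seed=0).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img)
print(detect_watermark(img), detect_watermark(marked))  # False True
```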

High-perturbation methods add a visible element to the image, which is considered a more robust approach. The researchers were able to remove all of these watermarks too. Their “model substitution adversarial attack” approach even removed the more subtle and advanced Tree-Ring watermarks.
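
The researchers’ attacks are far more sophisticated than a few lines of code can capture, but the underlying fragility is easy to demonstrate with the hypothetical LSB sketch above: even faint Gaussian noise followed by re-quantization, a change invisible to the eye, scrubs the watermark.

```python
def purify(image: np.ndarray, noise_std: float = 2.0) -> np.ndarray:
    """Add faint Gaussian noise and re-quantize to 8 bits.

    The result looks identical to the original, but every least
    significant bit is effectively re-rolled, so the naive watermark
    above is destroyed.
    """
    rng = np.random.default_rng()
    noisy = image.astype(np.float32) + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(np.round(noisy), 0, 255).astype(np.uint8)

print(detect_watermark(purify(marked)))  # almost certainly False
```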

As AI continues to power the creation and spread of disinformation, we need an effective way to tell when an image or video was created by AI. This research, however, concludes that an effective solution looks increasingly unlikely, albeit not impossible.

Feizi’s team could only test the watermarks that are publicly available. Google and Meta are working on their own watermarking technologies, but those were not yet available for testing.

Even so, Feizi believes their research indicates that even the big tech companies won’t deliver a foolproof solution. A paper published by researchers from the University of California reached a similar conclusion: “All invisible watermarks are vulnerable.”

As generative AI improves, it’s likely to become even more difficult to identify an AI-generated image by inspection alone. And even if commercial AI image generators like DALL-E or Midjourney employ an effective watermark, nothing compels other models to do the same.

The other issue highlighted in Feizi’s research is that his team was able to introduce AI watermarks into images that were not AI-generated.

The danger here is that bad actors could apply an AI watermark to genuine photographic evidence in an effort to discredit it.
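
Under the same hypothetical scheme sketched earlier, spoofing is simply the embedding step pointed at the wrong input. An attacker who can run, or closely approximate, the watermarking procedure can stamp a genuine photograph so that the detector flags it as AI-generated:

```python
# Stand-in for a genuine photograph (any real 8-bit image array works here).
real_photo = np.random.default_rng(seed=7).integers(0, 256, size=(64, 64), dtype=np.uint8)

forged = embed_watermark(real_photo)  # stamp the real photo with the "AI" watermark
print(detect_watermark(forged))       # True: authentic evidence now reads as AI-made
```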

We’ll have to see how Google and Meta’s watermarks fare against attacks like those Feizi’s team used. For now, it seems we’ll have to keep reminding ourselves that seeing is not believing.

 


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
