Google trials a digital watermark to identify AI images

August 30, 2023

Fake AI images can fool AI image detectors

Google’s DeepMind research lab has unveiled a beta version of SynthID, its new tool for watermarking and identifying AI-generated images.

Concern over deepfakes and the potential for AI to promote disinformation has highlighted the need for a way to detect AI-generated images. 

Conventional watermarks added to images for copyright purposes can be easily removed by cropping, adding filters, or editing. Even metadata can be stripped from images.

SynthID embeds the digital watermark directly into the pixels of the image. The changes it makes to pixel values are so subtle that the watermark is imperceptible to the human eye. Scaling, cropping, or reducing the resolution of the image will not remove the watermark.

If you knew where the watermark was, you could technically remove it, but you would also damage the image in the process.
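DeepMind hasn’t published the details of how SynthID modifies pixel values, but the general idea of hiding a faint, key-dependent signal in an image and recovering it statistically can be illustrated with a much simpler toy scheme. The sketch below is a basic spread-spectrum-style watermark in Python, not SynthID’s actual algorithm; the key, signal strength, and detection score are all invented for demonstration and would not survive the kinds of edits SynthID is designed to withstand.

```python
# Toy illustration of a pixel-level watermark: NOT SynthID, just the general
# idea of adding a faint key-derived signal and detecting it statistically.
import numpy as np

def make_pattern(shape, key):
    """Pseudorandom +/-1 pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, key, strength=2.0):
    """Add a low-amplitude key-derived pattern to the pixel values."""
    pattern = make_pattern(image.shape, key)
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(image, key):
    """Correlate the image with the key's pattern; a clearly positive score
    suggests the watermark is present, a score near zero suggests it isn't."""
    pattern = make_pattern(image.shape, key)
    centered = image.astype(np.float64) - image.mean()
    return float((centered * pattern).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in image
    marked = embed(original, key=42)
    print("score without watermark:", round(detect(original, key=42), 3))  # near zero
    print("score with watermark:   ", round(detect(marked, key=42), 3))    # clearly positive
```

Because the added pattern is spread across the whole image at very low amplitude, the change is invisible to the eye but still detectable statistically; making that detection survive cropping, rescaling, and compression is the hard part that SynthID claims to address.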

 

Google is making SynthID available to some of the clients using its image generator, Imagen, on the Vertex AI platform.

Users who create an image with Imagen can add the watermark or use SynthID to detect whether an image was created with Imagen. It’s not a foolproof method, though, as DeepMind’s blog post acknowledged.

“The tool provides three confidence levels for interpreting the results of watermark identification. If a digital watermark is detected, part of the image is likely generated by Imagen.”

The DeepMind blog post also said that “SynthID isn’t foolproof against extreme image manipulations.”
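The post doesn’t explain how those confidence levels are computed. As a purely hypothetical sketch, a detector that produces a raw score (like the toy example above) could map it to the kind of three-way verdict described; the thresholds below are invented for illustration and are not Google’s.

```python
# Hypothetical mapping from a raw detection score to a three-level verdict,
# mirroring the "detected / possibly detected / not detected" style of result
# described in the DeepMind post. The thresholds are illustrative only.
def interpret(score: float, detected: float = 1.0, not_detected: float = 0.2) -> str:
    if score >= detected:
        return "Watermark detected: part of the image is likely generated by Imagen."
    if score <= not_detected:
        return "Watermark not detected."
    return "Watermark possibly detected: treat the result as inconclusive."
```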

Will SynthID make any difference?

Google isn’t promoting the tool as a panacea for fake images, but it’s a start. If the method proves effective and gains traction with other AI image generators, it may become easier to spot a fake.

Even if a foolproof method is developed based on something like SynthID, how would you enforce it? There are so many AI image generators that lie outside any kind of regulation.

If you can’t stop people from using AI to generate images of child abuse, then how do you force them to add a watermark to fake images?

Another challenge for companies like Google is that, for a tool to promote trust, it has to be foolproof. If you tell people that a politically charged image has a 75% likelihood of being fake, that doesn’t eliminate arguments over its authenticity.

Adding a watermark like SynthID to a fake image seems like a great idea, but what stops someone from adding a similar watermark to a genuine image?

Critical thinking and a healthy dose of skepticism may be our best defense against AI fakes for a while yet.

Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
