Meta has announced new initiatives to enhance transparency around AI-generated content on its platforms.
In a blog post, Nick Clegg, Meta’s President for Global Affairs, emphasized the company’s commitment to labeling AI-generated images to help users distinguish human-created content from synthetic content.
Clegg stated, “As the difference between human and synthetic content gets blurred, people want to know where the boundary lies.” He further added that Meta has been “working with industry partners to align on common technical standards that signal when a piece of content has been created using AI.”
The move acknowledges the increasing prevalence of photorealistic AI-generated content, now widely known as deep fakes.
Taylor Swift was the latest high-profile victim of explicit, non-consensual AI-generated images, which users traced back to a ‘challenge’ on 4chan.
A scam orchestrated via deep fake video extracted $25.6 million from a multinational company on Monday.
The burden of responsibility for preventing these images from spreading falls squarely on social media companies like X and Meta.
Meta has been labeling photorealistic images created using its own Meta AI feature as “Imagined with AI” and plans to extend this practice to content generated by other companies’ tools.
The company says it’s developing tools capable of identifying invisible markers and metadata embedded in AI-generated images, enabling it to label content from a broad spectrum of sources, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
Meta is now poised to apply these labels across all supported languages on its platforms.
Clegg also highlighted the need for collaboration across the tech industry, writing, “Since AI-generated content appears across the internet, we’ve been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI). The invisible markers we use for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices.”
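To make that concrete, here is a minimal, best-effort sketch in Python of what checking for one such signal might look like. The IPTC standard defines a DigitalSourceType value, “trainedAlgorithmicMedia”, that generators can embed in an image’s XMP metadata to mark AI-created content. This is illustrative only, not Meta’s actual detection pipeline, and the file name is hypothetical.

```python
# A minimal sketch (not Meta's pipeline): check whether an image file's
# embedded XMP metadata carries the IPTC DigitalSourceType value that
# signals AI-generated media ("trainedAlgorithmicMedia").

def looks_ai_generated(path: str) -> bool:
    """Best-effort check for the IPTC AI-provenance tag in a file's XMP."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP is plain XML embedded in the file, so a byte search suffices for a
    # quick check. Note the limitation Clegg alludes to: absence of the tag
    # proves nothing, since metadata is easily stripped by screenshots or
    # re-encoding.
    return b"trainedAlgorithmicMedia" in data

if __name__ == "__main__":
    print(looks_ai_generated("downloaded_image.jpg"))  # hypothetical file
```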
Despite advances in clamping down on deep fakes, Clegg acknowledged that detecting AI-generated audio and video remains challenging because those formats lack comparable embedded signals.
But even fake images can rack up millions of impressions before being comprehensively removed, as the latest Taylor Swift incident showed.
Other incidents, such as commentator Ben Shapiro sharing a likely AI-generated image of a dead baby from the Israel-Palestine conflict, also attracted enormous attention worldwide before being countered.
As an interim measure, Meta will introduce a feature that allows users to disclose when they share AI-generated video or audio.
But who will use that feature if they’re posting content designed to ignite controversy?
In addition to labeling, Meta is exploring various technologies to enhance its ability to detect AI-generated content, even in the absence of invisible markers.
This includes research into invisible watermarking technology, such as the Stable Signature developed by Meta’s AI Research lab, FAIR, which integrates watermarking directly into the image generation process.
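Conceptually, verification in decoder-based schemes like this reduces to running a trained extractor over an image’s pixels and comparing the recovered bits against a registered key. The toy Python sketch below illustrates only that matching step: the decoder is a stub rather than FAIR’s trained network, the 48-bit signature length follows the Stable Signature paper, and all names and thresholds are illustrative.

```python
# Toy sketch of the verification side of decoder-based watermarking, in the
# spirit of Stable Signature: a model's outputs carry a fixed k-bit signature
# that a companion decoder network can recover. The decoder here is a stub.

import numpy as np

K = 48  # signature length in bits (per the Stable Signature paper)

def decode_bits(image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained watermark extractor. A real decoder is a
    neural network mapping pixels to k bits; here we fake its output."""
    rng = np.random.default_rng(0)
    return rng.integers(0, 2, size=K)

def matches_key(image: np.ndarray, key: np.ndarray,
                threshold: float = 0.9) -> bool:
    """Flag the image as watermarked if enough extracted bits agree with the
    registered key. The threshold trades false positives against robustness
    to crops, compression, and other edits."""
    bits = decode_bits(image)
    accuracy = float((bits == key).mean())
    return accuracy >= threshold

# Hypothetical usage: compare against the key registered for a given model.
image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder pixels
registered_key = np.random.default_rng(1).integers(0, 2, size=K)
print(matches_key(image, registered_key))
```

Because the signature is baked into the generation process itself, it is designed to survive edits that strip ordinary metadata, which is what makes this line of research a complement to the IPTC labeling described above.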
Elections upcoming, deep fakes intensifying
We’ve witnessed numerous incidents of political misinformation channeled through deep fakes.
Google, Meta, YouTube, X, and others are stepping up efforts to tackle deep fakes ahead of upcoming elections, including in the US and, most likely, the UK.
As AI-generated content becomes more sophisticated and widespread, the need for vigilance and innovation in content authentication and transparency grows.
Is a definitive solution in sight? It appears not.