The Internet Watch Foundation (IWF) warns that AI is being exploited to create child sexual abuse images at an alarming rate, raising fears it could overwhelm efforts to keep the internet safe.
The UK-based watchdog has identified nearly 3,000 AI-generated images violating UK laws.
The IWF reported that AI models are being fed images of actual abuse victims to generate new abusive content. The technology is also being used to produce images of underage celebrities in abusive scenarios and to ‘nudify’ photos of clothed children found online.
There are numerous international cases of deepfake child abuse, including a recent case in which several Spanish female schoolchildren were targeted with sexualized AI-generated images. In June, BBC journalists exposed pedophilic AI images circulating on platforms like Patreon.
Susie Hargreaves, IWF’s Chief Executive, expressed that their “worst nightmares have come true,” emphasizing the acceleration of AI technology for malicious purposes.
The foundation first highlighted the emerging threat during the summer and now confirms a significant increase in the volume of AI-generated child sexual abuse material (CSAM).
In a month-long investigation of a dark web child abuse forum, the IWF found that 2,978 of 11,108 images breached UK law, with 20% of the content categorized as the most severe form of abusive material.
Dan Sexton, IWF’s Chief Technology Officer, pointed out that Stable Diffusion, a widely available open-source AI model, was the primary tool discussed on the forum for creating CSAM.
As an open-source model, Stable Diffusion doesn’t necessarily carry the same guardrails as commercial image generators such as DALL-E 3.
Addressing exploitative AI-generated content on social media
Social media platforms have become hotbeds for sexually explicit and exploitative AI-generated content.
Popular social media platforms such as Facebook, Instagram, and TikTok are under scrutiny for hosting advertisements that sexualize AI avatars and promote adult-focused applications.
The advertisements are explicit, sexualizing child-like figures and exploiting loopholes in content moderation systems.
Despite efforts by Meta and TikTok to purge these advertisements, AI-generated content often evades automated filters. The apps’ inconsistent age ratings and global availability further complicate the issue.
As AI algorithms become more sophisticated, the creation of ultra-realistic images, videos, and audio poses a significant threat to individuals, businesses, and society at large.
The FBI’s warnings earlier in the year about AI sextortion, particularly involving minors, underscored the urgency of addressing this issue. The problem only seems to be accelerating.