A BBC investigation found that pedophiles are exploiting AI technology to generate and distribute realistic images of child sexual abuse.
These illicit AI-generated images are typically accessed through content-sharing sites like Patreon. The National Police Chiefs’ Council (NPCC) expressed indignation, highlighting the “outrageous” failure of some platforms to assume “moral responsibility” despite making substantial profits.
GCHQ, the UK government’s intelligence and cyber security agency, reacted to the report by stating, “Child sexual abuse offenders adopt all technologies and some believe the future of child sexual abuse material lies in AI-generated content.”
The images are created with Stable Diffusion, an open-source image-generation AI that lets users produce pictures from text prompts describing what they want.
Critically, one version of Stable Diffusion is open-source under the Apache License 2.0. Unlike Midjourney and DALL-E, it does not block prompts that would generate graphic or illicit images, giving creators free rein to produce almost any image they want.
The BBC discovered instances where Stable Diffusion was used to create hyper-realistic depictions of child sexual abuse. UK police forces confirmed they have already encountered illegal AI-generated material.
Freelance researcher and journalist Octavia Sheepshanks, who investigated the case, shared her findings with the BBC via the children’s charity the NSPCC. “Since AI-generated images became possible, there has been this huge flood… it’s not just very young girls, they [pedophiles] are talking about toddlers,” she noted.
In another case, users were exploiting a VR porn platform to create avatars of underage girls. One user allegedly took a photo of a real underage girl and superimposed it onto one of the models.
UK law treats computer-generated “pseudo images” depicting child sexual abuse the same as real images, so their possession, publication, or transfer is illegal.
The NPCC’s child safeguarding lead, Ian Critchley, argued that even though “synthetic” images don’t depict actual children, they can lead to real harm. He warned of a potential escalation in offenders’ actions “from thought, to synthetic, to actually the abuse of a live child.”
How did pedophiles create and distribute the images?
According to the investigation, the image distribution process involves three steps:
- Pedophiles generate images using AI software.
- These images are promoted on platforms like Pixiv, a Japanese picture-sharing website.
- Links are provided on these platforms, directing customers to more explicit images, which they can access by paying on sites such as Patreon.
Many image creators were active on Pixiv, which is used primarily by manga and anime artists.
However, the platform is hosted in Japan, where sharing sexualized cartoons of minors is not illegal, enabling creators to promote their work using groups and hashtags.
Pixiv banned all photo-realistic depictions of sexual content involving minors from 31 May. The platform also said it had strengthened its monitoring systems and was dedicating more resources to counter such content.
Sheepshanks told the BBC that the issue involves “industrial scale distribution.” Moreover, comments on Pixiv indicate that some users are offering non-AI-generated abuse material.
Many Pixiv accounts provide links to their “uncensored content” hosted on Patreon, a US-based content-sharing platform valued at around $4bn (£3.1bn).
Patreon has over 250,000 creators, including renowned celebrities, journalists, and writers. The investigation discovered Patreon accounts openly selling AI-generated obscene child images, with prices varying according to the type of material.
Stability AI and Patreon react
Once shown an example, Patreon confirmed that it violated its policies, and the account was promptly removed.
Stable Diffusion, the AI image generator used in creating these images, was developed through a global collaboration involving academics and companies led by UK-based Stability AI. As mentioned, the open-source version of the software has no filters, enabling the production of any image.
Stability AI responded by reinforcing its prohibition against misuse of its software for illegal or immoral purposes, including child sexual abuse material (CSAM).
The company said it strongly supports law enforcement efforts to combat the illegal use of its products.
The NPCC’s Ian Critchley expressed concerns that the influx of AI-generated or “synthetic” images could hamper the identification process of actual abuse victims. He said, “It creates additional demand, in terms of policing and law enforcement, to identify where an actual child, wherever it is in the world, is being abused as opposed to an artificial or synthetic child.”
Critchley added that society has reached a “pivotal moment” in determining whether the internet and technology will enable opportunities or become a source of greater harm.
The NSPCC echoed these sentiments, urging tech companies to take action against the alarming speed at which abusers have adopted emerging technologies.
Anna Edmundson, the charity’s head of policy and public affairs, argued that companies were aware of these potential risks but have not taken substantial action, emphasizing that there can be “no more excuses for inaction.”
A government spokesperson stated in response, “The Online Safety Bill will require companies to take proactive action in tackling all forms of online child sexual abuse including grooming, live-streaming, child sexual abuse material and prohibited images of children – or face huge fines.”
The UK’s Online Safety Bill was drafted largely before generative AI went mainstream, but it is still intended to address online AI-related harms.
Concerns surrounding deep fakes intensify
This draws further attention to AI’s role in deception and cybercrime. The rise of deep fakes (ultra-realistic AI-generated images, video, or audio) has brought about a host of ethical and security concerns.
Deep fakes use AI algorithms to extract patterns in facial movements, voice modulation, and body language from existing audio or video content. Using this analysis, AI can create highly convincing counterfeits that are nearly impossible to distinguish from the real thing.
Additionally, AI-generated child abuse will likely challenge existing legislation around the world. While AI-generated illicit child images are legally treated the same as real ones in the UK, this may not be the case elsewhere.
Lawmakers are having a hard time keeping up with AI, which is vastly more agile than clunky legislative processes.
In the meantime, fraudsters and criminals are blurring the lines between what’s real and what isn’t while exploiting gaps in the law.
The first high-profile case of AI-supported fraud occurred in 2019 when fraudsters used AI to impersonate a CEO’s voice to convince a senior executive to transfer €220,000 ($243,000), claiming it was for an urgent supplier payment. The executive believed he was speaking with the CEO, but in reality, he was interacting with a deep fake.
Earlier this year, a fake image of an explosion at the Pentagon circulated on social media, causing the US stock market to temporarily fall by 0.3%.
Many are concerned that AI will be used to create false news reports or defamatory content about political figures. Politicians are already using AI-generated content to bolster their campaigns.
Furthermore, there has been an increase in non-consensual deep fake pornography, in which individuals’ faces are superimposed onto explicit images or videos.
The FBI warned about AI sextortion earlier in June, stating, “Malicious actors use content manipulation technologies and services to exploit photos and videos—typically captured from an individual’s social media account, open internet, or requested from the victim—into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites. Many victims, which have included minors, are unaware their images were copied, manipulated, and circulated until it was brought to their attention by someone else.”
A UK documentary, My Blonde GF, explores a real case of deep fake revenge porn that gripped the life of writer Helen Mort.
Technology companies such as Google, Twitter, and Facebook are developing detection tools to identify deep fake content, but as AI becomes more sophisticated, such mechanisms could become less reliable.
While progress is being made, individuals, businesses, and society as a whole must stay informed about the potential risks of deep fake technology, which has been highlighted as one of AI’s most imminent and pressing risks.