Australian authorities launch investigation into explicit AI deep fakes

June 13, 2024

  • Australian authorities are investigating a deep fake incident at a Melbourne school
  • Explicit images of some 50 children were created, according to the police
  • Deep fake incidents targeting young children have become more common

Police in Australia launched an investigation into the distribution of AI-generated pornographic images of around 50 schoolgirls, with the perpetrator believed to be a teenage boy. 

In an interview with ABC on Wednesday, Emily, the mother of a 16-year-old girl attending Bacchus Marsh Grammar, revealed that her daughter was physically sickened after viewing the “mutilated” images online.

“I collected my daughter from a sleepover, and she was extremely distressed, vomiting because the images were incredibly graphic,” she explained to ABC Radio Melbourne.

The school issued a statement declaring its commitment to student welfare, noting that it’s offering counselling and cooperating with the police.

“The wellbeing of our students and their families at Bacchus Marsh Grammar is a top priority and is being actively addressed,” the school stated.

This comes as the Australian government pushes for stricter laws on non-consensual explicit deep fakes, with proposals that would raise the maximum prison sentence for creating and sharing child sexual abuse material (CSAM), AI-generated or otherwise, to seven years.

Explicit deep fakes on the rise

Experts say online predators frequenting the dark web are increasingly harnessing AI tools – specifically text-to-image generators such as Stability AI's Stable Diffusion – to produce new CSAM.

Disturbingly, these CSAM creators sometimes fixate on past child abuse survivors whose images circulate online. Child safety groups report finding numerous chatroom discussions about using AI to create more content depicting specific underage “stars” popular in these abusive communities.

AI enables abusers to create new explicit images that revictimize and retraumatize these survivors.

“My body will never be mine again, and that’s something that many survivors have to grapple with,” Leah Juliett, an activist and CSAM survivor, recently told the Guardian.

An October 2023 report from the UK-based Internet Watch Foundation revealed the scale of AI-generated CSAM, finding more than 20,000 such images posted to a single dark web forum in a one-month period.

The images are often indistinguishable from authentic photos and depict deeply disturbing content, including the simulated rape of infants and toddlers.

Last year, a Stanford University report revealed that hundreds of real CSAM images were included in LAION-5B, a dataset used to train popular AI image tools. Experts say that once the dataset was openly released, the creation of AI-generated CSAM exploded.

Recent arrests demonstrate that the issue is not theoretical, and police forces worldwide are taking action. In April, for example, a Florida man was charged after allegedly using AI to generate explicit images of a child neighbor.

Last year, a North Carolina man – a child psychiatrist, of all people – was sentenced to 40 years in prison for using AI to create CSAM from images of his patients.

And just weeks ago, the US Department of Justice announced the arrest of 42-year-old Steven Anderegg in Wisconsin for allegedly creating more than 13,000 AI-generated abusive images of children.

Current laws are not enough, say lawmakers and advocates

While most countries already have laws criminalizing computer-generated CSAM, legislators want to strengthen regulations. 

For example, in the US, a bipartisan bill has been introduced to allow victims to sue creators of explicit non-consensual deep fakes. 

However, gray areas remain where it's difficult to determine precisely which laws such activities break.

In Spain, for example, a young student was found to have spread AI-generated explicit images of classmates. Some argued the images should fall under child pornography laws, bringing harsher charges, whereas others said they might not meet those criteria under current law.

A similar incident at a school in New Jersey showed how children may be using these AI tools naively, exposing themselves to serious risks in the process.

Tech companies behind AI image generators prohibit using their tools to create illegal content. However, numerous powerful AI models are open-source and can be run privately offline, so the box cannot be completely closed. 

Moreover, much of the criminal activity has also shifted to encrypted messaging platforms, making detection even harder.

If AI opened Pandora’s box, this is certainly one of the perils that lay inside it.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
