AI-generated fake nudes of students from Westfield High School in New Jersey, US, were circulated among peers, and there is no clear legal recourse against the perpetrators.
Principal Mary Asfendis described the situation in an email to parents as a “very serious incident.”
The school has remained silent on the specifics of the incident, including the number of students affected and any disciplinary measures taken against the perpetrators, citing confidentiality concerns.
The principal’s email, which was leaked to the press, read:
“I am writing to make you aware of a situation that resulted in widespread misinformation and resulted in significant worry and concern amongst the student body of Westfield High School. Earlier today, students brought to our attention that some of our students had used Artificial Intelligence to create pornographic images from original photos. There was a great deal of concern about who had images created of them and if they were shared.
At this time, we believe that any created images have been deleted and are not being circulated. This is a very serious incident. We are continuing to investigate and will inform individuals and families of students involved once the investigation is complete. This will happen before the weekend. We made counseling available for all affected students and encouraged them to return to class when they felt able to do so. Additionally, our School Resource Officer and the Westfield PD have been made aware of our investigation. If a parent/guardian thinks their child is a victim of a criminal act in relation to this incident, please report the matter to Westfield Police.
I wanted to make you aware of the situation, as, in addition to harming the students involved and disrupting the school day, it is critically important to talk with your children about their use of technology and what they are posting, saving, and sharing on social media. New technologies have made it possible to falsify images and students need to know the impact and damage those actions can cause to others.
We will continue to educate your children on the importance of responsible use of technology and hope you reinforce these messages at home.”
This situation is not unique to Westfield High School. Spain saw a similar incident in September, and public figures and online influencers have also been entangled in deepfake pornography controversies.
New Jersey has no legislation penalizing the creation or distribution of AI-generated fake nudes.
Governor Phil Murphy, on October 10, unveiled an AI task force to probe the technology’s benefits, growth, and risks, but no law directly addressing the issue has been passed.
President Biden’s executive order on AI regulation proposes measures for supervision, the establishment of safeguards, and technological investment. However, it stops short of instituting a blanket federal law against deepfake nudes, and there is no timeline for such legislation.
Bradley Waldstreicher, associate editor at Cardozo Law Review, underscored the complexity of federal deepfake legislation in May 2021, stating: “If there is federal deepfake legislation passed that specifically targets pornographic deepfakes, then, in order to be constitutional, pornographic deepfakes may need to fit into the First Amendment categorical exception of being a separate category like child pornography….[T]here is no clear solution. At the very least, before substantive federal legislation is passed, researchers need to conduct more studies to learn about the impact that pornographic deepfakes have on victims and internet users.”
A closer look at the rise of AI-generated fake nudes reveals staggering numbers. According to the Washington Post, citing Genevieve Oh, an industry analyst, there has been a 290% increase in such content since 2018 on prominent websites hosting AI-generated porn photos.
A UK investigation by the BBC revealed the astonishing rise of deepfake pornography, including that of children, on sites such as Patreon.
In 2023 alone, 143,000 deepfake porn videos were posted across popular platforms, generating over 4.2 billion views, a significant jump from previous years. Sensity AI reports that 90 to 95% of these videos feature non-consensual female imagery.
The FBI also highlighted a surge in “sextortion” cases, with 26,800 US victims reported through September this year, marking a 149% increase from 2019.
Deepfakes are one of AI’s most imminent and pressing risks, and they are exceptionally difficult to contain.
Fakes circulating during the Israel-Palestine conflict have also highlighted how authentic images can be falsely labeled as AI-generated, an example of the “liar’s dividend,” where real information is dismissed as fake.
This also extends to AI-generated voices and videos, often targeting political leaders, with fears rising that this could influence voting behavior.