Several AI-generated deep fake videos of murdered and abused children recently appeared on TikTok.
One of the videos, which the channel called ‘educational,’ describes the fate of James Bulger, who was abducted in 1993 by Jon Venables and Robert Thompson, both then aged 10, in Liverpool, Merseyside.
Bulger, then just 2 years old, was horrifically abused and tortured before being found dead near a railway line. Venables and Thompson became the youngest individuals convicted of murder in the UK and the youngest convicted murderers of the 20th century.
Denise Fergus, James Bulger’s mother, has condemned the AI-generated videos of her deceased son as “beyond sick.”
These videos formed part of a series that employed AI technology to recreate missing or murdered children, including Madeleine McCann, who vanished during a family vacation in 2007; 11-year-old Rhys Jones, mistakenly gunned down by a gang member in Liverpool; and Peter Connelly, also known as ‘Baby P,’ who died of neglect and abuse.
Some of these videos racked up tens of thousands of views before their removal and were uploaded without any content warnings.
Speaking to The Mirror, Fergus said that while she had no problem with reporting on the cases, she found it “absolutely disgusting” to see her deceased son’s face used in such a manner.
“It is bringing a dead child back to life. It is beyond sick,” she declared. “Who can sit there and think of such a thing?” she asked, adding, “It is not fair on the people who have lost children, or lost anyone.”
Her husband, Stuart Fergus, contacted the account, apparently based in the Philippines, requesting the removal of the offensive content. The response he received stated they “do not intend to offend anyone” and that they aimed to ensure such incidents “will never happen again,” along with a call to support and share their page.
TikTok responded to the controversy, telling the BBC that the offending videos had been removed, asserting there was “no place” on the app for such content.
A TikTok spokesperson highlighted the company’s community guidelines, stating they “do not allow synthetic media that contains the likeness of a young person” and promised to continue taking down such content.
Deep fakes take a turn for the sinister
There is a growing narrative that AI’s futuristic risks – such as seizing control of critical infrastructure or otherwise turning against its creators – are a distraction from immediate harms such as deep fakes, disinformation, and cybercrime.
As deep fakes become increasingly sophisticated, it will become harder to distinguish the digital from the real. Like so much in the AI world, the impact is unprecedented.