A detailed investigation by BBC journalists has unearthed an alarming trend of AI-generated YouTube videos filled with false “scientific” information aimed at misleading young viewers.
The BBC team discovered “more than 50 channels in more than 20 languages spreading disinformation disguised as STEM [Science, Technology, Engineering, Maths] content.”
These channels specialize in pseudo-science, false information, and conspiracy theories ranging from claims of electricity-producing pyramids to denial of human-caused climate change and discussions about extraterrestrial beings.
YouTube’s algorithms were found to recommend these videos to younger viewers.
AI has made it far easier to mass-produce YouTube videos, fueling concerns that the quality of content on the platform is declining.
AI’s role in spreading disinformation
The BBC’s analysis revealed that most of these misleading videos were produced with generative AI tools, such as ChatGPT and Midjourney, to generate text, images, and narration.
YouTuber and science educator Kyle Hill had firsthand experience with this issue, stating, “The creators appear to have stolen and manipulated accurate content and then republished them.”
He further commented, “These channels seemed to have identified the exact right thing to maximize views for the least amount of effort.”
The BBC team conducted an experiment to understand the extent of children’s exposure to this misleading content.
They set up child accounts on YouTube and found that these AI-generated “bad science” videos were indeed recommended to these accounts.
The investigation also gauged reactions to the videos by showing them to two groups of 10- to 12-year-olds.
One video focused on UFO and alien conspiracies, whereas another falsely claimed the Pyramids of Giza were used for electricity generation.
“I enjoyed watching it,” one child said. “At the beginning, I wasn’t sure aliens exist, but now I think they do.”
Some children did detect the fakery: “I found it quite funny that they didn’t even use a human voice; I thought it wasn’t human,” one said.
AI’s interaction with children has been hotly debated in recent weeks, with 50 US state prosecutors banding together to warn Congress of the technology’s dangers to younger generations earlier this month.
In August, the World Economic Forum (WEF) warned that policy addressing AI’s role in children’s lives is grossly lacking.
YouTube pointed out that it recommends YouTube Kids for those under 13, claiming it has a “higher bar” for the quality of videos that can be shown.
However, the company did not answer questions about the advertising revenue generated by these misleading videos.
Claire Seeley, a primary school teacher in the UK, expressed her concerns about the future.
“We don’t have a really clear understanding of how AI-generated content is really impacting children’s understanding. As teachers, we’re playing catch up to try to get to grips with this.”
Professor Vicki Nash, Director of the Oxford Internet Institute, raised ethical concerns: “The idea that YouTube and Google are making money off the back of adverts being served with pseudo-science news seems really unethical to me.”
As AI continues to evolve, the challenges posed by AI-generated disinformation are only expected to escalate, requiring vigilant action from tech companies, educators, and parents alike.
While regulation will increase pressure on social media platforms to act on AI-generated content, that is a tall order when detecting such content in the first place remains notoriously difficult.