As Bangladesh prepares for its national elections in early January, an increasingly familiar threat is rearing its ugly head: AI-generated deep fake misinformation.
The Bangladeshi election is a heated contest between the current Prime Minister, Sheikh Hasina, and the opposition Bangladesh Nationalist Party.
Numerous reports draw attention to pro-government groups employing these tools to produce AI-generated news clips that are designed to sway public opinion.
For instance, in one such clip, an AI-created Bangladeshi news anchor criticizes the US, which echoes the current government’s position.
Another example is deep fake videos aiming to discredit opposition figures, such as showing them taking unpopular stances on sensitive issues like support for Gaza, a hugely important matter of debate in this 175 million-strong Muslim-majority country.
Deep fake misinformation and disinformation have arguably become AI’s most immediate and pressing risks. It was only this week that Russian President Vladimir Putin was confronted by an AI copy of himself in a press conference – a student was using it to make a point about deep fakes in a live televised Q&A session.
The Slovakian election suffered from social media disinformation in the final 48 hours before polls opened, a period during which the media is forbidden from discussing political news.
Major technology companies like Google and Meta are beginning to implement policies for political advertisements, including the requirement to disclose digital alterations. However, thus far, these measures seem weak, especially in regions that don’t attract as much attention from these corporations.
AI regulation, in general, is primarily a Western debate, with developing countries mostly lacking digital regulation and rights protections, leaving people and their data open to exploitation.
How does deep fake technology work?
In essence, ‘deep fake’ is a non-technical term that describes any extremely effective fake, typically created with generative AI tools.
An illustrative example is HeyGen, an AI video generator based in Los Angeles. For a modest subscription fee, this service allows users to create clips with AI avatars.
One such video, alleging US interference in the Bangladeshi elections, was circulated on social media platforms.
Since these AI avatars are so lifelike, they often slip through social media filters, and the general public may not react immediately. Some fake images associated with the Israel-Palestine conflict reached millions of people before being comprehensively outed as fake, for example.
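One common building block behind the image filters mentioned above is perceptual hashing: once an image is confirmed fake, platforms can fingerprint it so that re-uploads are flagged even after resizing or recompression. The sketch below is a minimal, stdlib-only illustration of an average hash on toy grayscale "images" (2D lists of 0–255 pixel values); it is not any specific platform's system, and production tools use far more robust hashes and libraries.

```python
# Minimal average-hash (aHash) sketch: fingerprint an image so that
# near-duplicates (e.g. a recompressed copy of a known fake) match,
# while unrelated images do not. Illustrative only.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A "known fake" image, a slightly recompressed copy, and an unrelated image.
original = [[10, 200], [220, 30]]
recompressed = [[12, 198], [215, 35]]  # small pixel-level noise survives hashing
unrelated = [[200, 10], [30, 220]]

h_orig = average_hash(original)
h_copy = average_hash(recompressed)
h_other = average_hash(unrelated)

print(hamming_distance(h_orig, h_copy))   # 0: flagged as a near-duplicate
print(hamming_distance(h_orig, h_other))  # 4: clearly a different image
```

The limitation this exposes is the one the article describes: hashing only catches fakes that have already been identified once; a freshly generated deep fake has no fingerprint on file.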
The absence of robust AI detection tools for non-English content exacerbates the problem.
As AI technology advances and becomes more accessible, the imperative for regulatory bodies and digital platforms is to develop and enforce effective measures to counteract disinformation.
Thus far, it’s proving an intractable task.