Violent extremists are utilizing generative AI tools to create content at scale, circumventing guardrails and filters in the process, according to a new report.
Intelligence firm Tech Against Terrorism identified around 5,000 examples of AI-generated content, including images linked to groups like Hezbollah and Hamas.
These images often relate to sensitive topics like the Israel-Hamas war, indicating a strategic use of AI to influence narratives.
There have been numerous examples of AI-generated images implicating both Israel and Palestine in atrocities and events they had no part in. Deepfakes pose a dual threat: fake images can be mistaken for real, and real images dismissed as fake.
Adam Hadley, the executive director of Tech Against Terrorism, stated, “Our biggest concern is that if terrorists start using gen AI to manipulate imagery at scale, this could well destroy hash-sharing as a solution. This is a massive risk.”
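Hadley's concern follows from how hash-sharing works: participating platforms match uploads against digests of known terrorist content, so a match requires the file to be (near-)identical. The toy sketch below (my own illustration, not from the report; real systems such as PhotoDNA use more robust perceptual hashes rather than raw cryptographic digests) shows why even a trivial, AI-scale alteration defeats an exact-digest lookup.

```python
import hashlib

# Stand-in bytes for a known propaganda image already in a hash-sharing database.
original = bytes(range(256)) * 16

# A generative tool re-renders the image with a tiny change (here, one byte flipped).
variant = bytearray(original)
variant[0] ^= 0x01

h_original = hashlib.sha256(original).hexdigest()
h_variant = hashlib.sha256(bytes(variant)).hexdigest()

# The variant no longer matches the shared digest, so exact matching misses it.
print(h_original == h_variant)  # False
```

Generate enough such variants automatically and every one of them slips past a database of exact hashes, which is the "at scale" risk Hadley describes.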
📰 In the news | Here’s how violent extremists are exploiting generative AI tools
— Tech Against Terrorism (@techvsterrorism) November 10, 2023
From neo-Nazi groups using racist and antisemitic AI-generated imagery to the Islamic State (IS) publishing guides on using generative AI tools securely, the scope of AI’s misuse is ever-widening.
Extremist groups are also utilizing AI to create multilingual propaganda and personalized recruitment messages.
The problem extends beyond extremist propaganda. For instance, the Internet Watch Foundation reported over 20,000 AI-generated images of child sexual abuse on a single dark web forum in one month, underlining the broader implications of AI misuse.
Hadley maintained that there are ways to stem this trend, suggesting, “We’re going to partner with Microsoft to figure out if there are ways using our archive of material to create a sort of gen AI detection system in order to counter the emerging threat that gen AI will be used for terrorist content at scale.”
Hadley suggests that such a tool could be made available across multiple platforms, enabling them to clamp down on problematic AI-generated content without building their own detection systems.
More about the report
Tech Against Terrorism’s intelligence operations archived over 5,000 pieces of AI-generated content used by terrorist and violent extremist (TVE) actors. The firm notes that this represents a small fraction of the total volume of TVE content identified annually.
The report primarily covers content across Islamist and far-right ideologies, showing a concerning trend of TVEs using generative AI for propaganda.
Key findings and concerns
TVE networks affiliated with Islamic State (IS), Al-Qaeda supporters, and neo-Nazis have been experimenting with generative AI.
- An IS tech support group publishing a guide on using AI content generators securely.
- Neo-Nazi messaging channels distributing AI-generated racist and antisemitic imagery.
- Far-right propagandists using AI image tools to create extremist memes.
- Pro-IS users claiming to have used AI to transcribe and translate IS propaganda.
- Al-Qaeda-aligned outlets publishing AI-generated propaganda posters.
Potential risks of generative AI exploitation
According to the report:
- Media spawning: Generating variants of images or videos to bypass detection.
- Automated multilingual translation: Translating text-based propaganda into multiple languages.
- Fully synthetic propaganda: Creating artificial content, such as speeches and images.
- Variant recycling: Repurposing old propaganda into “new” versions.
- Personalized propaganda: Customizing messaging for targeted recruitment.
- Subverting moderation: Designing content to bypass existing moderation techniques.
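Media spawning exploits the gap illustrated above: byte-exact hashes break under tiny edits. A detection system of the kind Hadley describes would more plausibly build on perceptual hashing, which fingerprints what content looks like rather than its exact bytes. The sketch below is a minimal toy "average hash" of my own construction (production systems like PhotoDNA or pHash are far more robust), showing that a lightly perturbed variant still matches while unrelated content does not.

```python
def average_hash(pixels):
    """pixels: flat list of grayscale values (0-255). Returns a bit string:
    1 where a pixel is brighter than the image's mean, else 0."""
    avg = sum(pixels) / len(pixels)
    return ''.join('1' if p > avg else '0' for p in pixels)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

image = [10, 200, 30, 220, 15, 210, 25, 230]    # toy 8-pixel "known image"
variant = [12, 198, 33, 219, 15, 212, 24, 231]  # lightly perturbed spawn
other = [200, 10, 220, 30, 210, 15, 230, 25]    # unrelated content

print(hamming(average_hash(image), average_hash(variant)))  # 0: still matches
print(hamming(average_hash(image), average_hash(other)))    # 8: far apart
```

A moderation pipeline would flag anything within a small Hamming distance of a known hash, so spawned variants stay detectable even though their raw bytes differ.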
Opportunities and challenges
- Proactive solutions and collaborative efforts, like “red-teaming,” are needed to identify and mitigate risks.
- Small-scale instances of AI used by TVE actors signal an emerging threat, necessitating technical solutions and policy-making.
- Extremist groups are likely to continue exploring AI tools to augment propaganda strategies.
- Unofficial violent extremist outlets may exploit AI tools more aggressively because they lack resources and original material.
While the current risk of widespread adoption of generative AI by TVEs is low, emerging patterns indicate a rising threat.
Deep fakes and other forms of AI-supported manipulation pose a pressing risk that tech companies are struggling to control.