AI-powered ‘synthetic cancer’ worm represents a new frontier in cyber threats

July 17, 2024

  • Researchers built a worm that can harness AI tools to self-replicate and spread
  • It interacts with GPT-4 to replicate its code and launch phishing attacks
  • Researchers built a different variant of an 'AI worm' earlier this year

Researchers have unveiled a new type of computer virus that harnesses the power of large language models (LLMs) to evade detection and propagate itself. 

This “synthetic cancer,” as its creators dub it, heralds what could be a new era of malware.

David Zollikofer from ETH Zurich and Benjamin Zimmerman from Ohio State University developed this proof-of-concept malware as part of their submission to the Swiss AI Safety Prize.

Their creation, detailed in a pre-print paper titled “Synthetic Cancer – Augmenting Worms with LLMs,” demonstrates the potential for AI to be exploited to create new, highly sophisticated cyber attacks. 

Here’s a blow-by-blow of how it works:

  1. Installation: The malware is initially delivered via email attachment. Once executed, it can download additional files and potentially encrypt the user’s data.
  2. Replication: The most interesting stage leverages GPT-4 or similar LLMs. The worm can interact with these models in two ways: a) through API calls to cloud-based services such as OpenAI’s GPT-4, or b) by running a local LLM (which could become common on future devices).
  3. GPT-4/LLM usage: Zollikofer explained to New Scientist, “We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit.” The LLM then generates a new version of the code with altered variable names, restructured logic, and potentially even different coding styles, all while maintaining the original functionality. 
  4. Spreading: The worm scans the victim’s Outlook email history and feeds this context to the AI. The LLM then generates contextually relevant email replies, complete with social engineering tactics designed to encourage recipients to open an attached copy of the worm. 

As we can see, the virus uses AI in two ways: to rewrite its own code so it can self-replicate, and to write phishing content so it can continue spreading.

Moreover, the worm’s ability to rewrite its own code presents a particularly challenging problem for cybersecurity experts, as it could render traditional signature-based antivirus solutions obsolete.
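To see why, consider a minimal, illustrative Python sketch (not the researchers’ code, which was deliberately withheld): it asks an LLM to rewrite a harmless snippet while preserving its behavior, then compares the SHA-256 hashes of the two versions. The model name, prompt, and snippet here are assumptions made purely for this example.

```python
# Illustrative only: shows why signature (hash) matching struggles against
# LLM-rewritten code. It refactors a harmless snippet; it is NOT the
# researchers' worm code, which was not released.
import hashlib
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

original_snippet = '''
def add_numbers(a, b):
    total = a + b
    return total
'''

# Ask the model for a semantically equivalent rewrite with new names and structure.
response = client.chat.completions.create(
    model="gpt-4",  # model choice is an assumption for this sketch
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this Python function so it behaves identically but uses "
            "different variable names and structure. Return only the code:\n"
            + original_snippet
        ),
    }],
)
rewritten_snippet = response.choices[0].message.content

# The two versions behave the same, but their byte-level signatures differ.
print("original hash: ", hashlib.sha256(original_snippet.encode()).hexdigest())
print("rewritten hash:", hashlib.sha256(rewritten_snippet.encode()).hexdigest())
```

Every rewrite produced this way yields a different byte pattern, so an antivirus signature keyed to one copy will not match the next, even though the behavior is unchanged.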

“The attack side has some advantages right now, because there’s been more research into that,” Zollikofer notes. 

Cornell Tech researchers reported a similar AI-powered worm in March. Ben Nassi and his team created a worm that could attack AI-powered email assistants, steal sensitive data, and propagate to other systems. 

Nassi’s team targeted email assistants powered by OpenAI’s GPT-4, Google’s Gemini Pro, and the open-source model LLaVA.

“It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential,” Nassi told Wired, underlining the potential for massive data breaches.

While Nassi’s worm primarily targeted AI assistants, Zollikofer and Zimmerman’s creation goes a step further by directly manipulating the malware’s code and writing the phishing emails autonomously.

AI cybersecurity fears are brewing

It has been a tumultuous few days for cybersecurity in an AI context, with Disney suffering a data breach at the hands of a hacktivist group.

The group said it was fighting tech companies on behalf of creators whose copyrighted work had been stolen or otherwise devalued.

Not long ago, OpenAI was exposed for having suffered a breach in 2023, which it tried to keep under wraps. OpenAI and Microsoft have also released a report admitting that hacker groups from Russia, North Korea, and China had been using their AI tools to craft cyber attack strategies.

Study authors Zollikofer and Zimmerman have implemented several safeguards to prevent misuse, including not sharing the code publicly and deliberately leaving specific details vague in their paper.

“We are fully aware that this paper presents a malware type with great potential for abuse,” the researchers state in their disclosure. “We are publishing this in good faith and in an effort to raise awareness.”

Meanwhile, Nassi and his colleagues predicted that AI worms could start spreading in the wild “in the next few years” and “will trigger significant and undesired outcomes.” 

Given the rapid advancements we’ve witnessed in just four months, this timeline seems not just plausible, but potentially conservative.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
