Google Cloud has released its cybersecurity and cybercrime forecast for 2024, and the threat from AI is at the top of its list.
Scammers have used phishing emails and SMS messages for a long time, but the report says that large language models (LLMs) are going to make these scams harder to spot.
Previously, your suspicions may have been raised by the poor grammar and spelling the ‘Nigerian prince’ used in his email about a great financial opportunity he wanted to share with you.
A scammer can now easily run his pitch through an LLM to translate and polish it before it lands in your inbox. The forecast also expects that scammers will use legitimate content to prompt an LLM and then generate modified versions that look and feel like the original.
Generative AI is also expected to enable scammers to move from traditional email and SMS tactics to AI-generated voice and video scams.
LLMs and generative AI will let scammers operate at scale while making their messaging far more tailored for personalized attacks.
The forecast also noted that as attackers use fake news to scam people, society’s trust in all sources of news and information will be eroded.
What will cybersecurity look like in 2024?
Get your copy of the Google Cloud Cybersecurity Forecast 2024 report for insights on attacker tactics and operations in the coming year → https://t.co/APJfNoEQNC pic.twitter.com/1Kpr04nsUR
— Google Cloud (@googlecloud) November 8, 2023
AI cybercrime as a service
Earlier this year, we wrote about FraudGPT, a chatbot ostensibly built specifically for hacking and scamming. There was a fair amount of skepticism about how useful it would actually be to criminals, but Google expects we’ll see more of these tools in the future.
The forecast explained that we’re likely to see AI cybercrime tools offered as a service. The report said, “They will be offered in underground forums as a paid service, and used for various purposes such as phishing campaigns and spreading disinformation.”
Cybercrime usually targets individuals or corporations, but the forecast sees objectives other than financial gain on the horizon.
With the US presidential elections happening next year, the report said, “We will see nation states and other threat actors engage in a variety of cyber activity, including espionage and influence operations targeting the electoral systems, impersonation of candidates on social media, and information operations designed to target the voters themselves.”
The Big Four
Google’s forecast lists China, Russia, North Korea, and Iran as the main cyber threats to the rest of the world.
Besides espionage, it also says China and Russia will continue to target African countries with disinformation, especially countries with rare earth deposits, which are essential to many high-tech products.
The forecast pointed to the 2024 Paris Olympics as a specific target for criminals and anti-Europe actors. This already seems to be happening, with a deepfake of Tom Cruise criticising the IOC.
In what purports to be a Netflix promo, Tom Cruise appears to say that corrupt officials are “slowly and painfully destroying the Olympic sports that have existed for thousands of years.”
An AI “Tom Cruise” joins the fake news barrage targeting the Olympics.https://t.co/YV2Hkp1Hdp
— POLITICOEurope (@POLITICOEurope) November 10, 2023
With Russian athletes barred from competing under their country’s flag, it’s a fair bet that pro-Kremlin actors are behind this deepfake.
Google’s forecast makes for grim reading, and it seems that evading AI-powered cybercrime completely may well be Mission Impossible.