The UK’s National Cyber Security Centre (NCSC) has recently released a pivotal report, providing a thorough assessment of how AI is poised to influence the cyber threat landscape over the next two years.
This report follows the discussions held at the Bletchley AI Safety Summit in November 2023, where the “dual nature” of AI as both a cybersecurity benefit and a cybersecurity risk was a key focus.
While AI advancements present new opportunities for cyber defense, they also pave the way for more sophisticated and harder-to-detect cyber attacks.
The report suggests a future where the cybersecurity landscape becomes increasingly complex, with AI playing a central role in both defending against and facilitating cyber threats.
Key risks from the report include:
- Intensified volume and impact of cyber attacks: AI is set to increase the frequency and severity of cyber attacks.
- Lower barriers for entry-level cyber criminals: AI technologies are making it easier for novice cybercriminals to execute effective cyber attacks.
- Widespread availability of AI-enabled cyber capabilities: The increasing accessibility of AI tools in criminal and commercial markets is likely to arm cyber threat actors with new approaches.
- Increased challenges in cyber resilience: AI will complicate the distinction between legitimate and malicious communications, amplifying the challenges in cybersecurity management.
- Social engineering: AI enhances the capability of threat actors in social engineering by generating authentic-looking lure documents and communications necessary for phishing scams.
- Malware and exploit development: AI is likely to assist in developing malware and exploits, making existing techniques more efficient.
- Reconnaissance: AI will enhance reconnaissance activities, helping threat actors identify high-value assets for examination and exfiltration (an unauthorized data transfer) more quickly.
Lindy Cameron, Chief Executive of the NCSC, summarized the report’s findings: “The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.”
This could lead to a surge in cyber attacks, especially ransomware, from less experienced individuals or groups. However, highly capable state threat actors remain best positioned to utilize AI in developing advanced malware.
We’ve already seen the potential of fraud-oriented generative AI tools like FraudGPT and WormGPT.
Previous reports in a similar vein have likewise highlighted AI’s democratization and scaling of cyber attacks. OpenAI, for its part, recently announced plans to clamp down on election misinformation and deepfakes.