Microsoft has reported the misuse of its AI technologies by state-sponsored hackers from Russia, China, Iran, and North Korea.
Microsoft collaborated with OpenAI to observe groups linked to Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments as they used AI tools to refine their hacking strategies and craft convincing deceptive messages.
As part of their research, Microsoft and OpenAI disrupted five state-backed malicious groups by terminating their access to OpenAI’s services.
The five are two China-affiliated actors, Charcoal Typhoon and Salmon Typhoon; Crimson Sandstorm from Iran; Emerald Sleet, linked to North Korea; and Forest Blizzard, associated with Russia.
These groups were exploiting OpenAI’s services for tasks such as querying open-source information, translation, identifying coding errors, and performing basic coding tasks. Specifically:
- Charcoal Typhoon was using OpenAI’s services for researching companies and cybersecurity tools, debugging code, generating scripts, and creating phishing content.
- Salmon Typhoon utilized OpenAI’s services for translating technical documents, gathering intelligence on various agencies, assisting with coding, and researching stealth techniques.
- Crimson Sandstorm engaged OpenAI’s services for scripting support in app and web development, crafting spear-phishing content, and exploring evasion methods.
- Emerald Sleet leveraged OpenAI’s services to identify defense experts and organizations, understand vulnerabilities, assist with basic scripting, and prepare phishing materials.
- Forest Blizzard used OpenAI’s services mainly for open-source research into satellite and radar technologies and for scripting assistance.
The fact that international threat actors are using OpenAI’s tech embodies AI’s dual-use conundrum.
Threat actors are deliberately choosing OpenAI’s models over developing their own: it’s cheaper, quicker, and more effective. State-built or open-source alternatives are probably in the pipeline, though.
Tom Burt, Microsoft’s Corporate Vice President for Customer Security and Trust, said of the findings: “Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified – that we track and know are threat actors of various kinds – to have access to this technology.”
Microsoft is banning these state-backed groups from any AI application or workload hosted on Azure. The Biden administration recently requested that tech firms report certain foreign cloud technology users.
China responded through its US embassy spokesperson, Liu Pengyu, who denounced the accusations as unfounded and advocated for the responsible deployment of AI technologies to benefit humanity.
China has been collaborating with the US on AI safety behind closed doors, though perhaps out of necessity rather than desire.
This is the latest in a series of cybercrime-themed reports from Microsoft in recent months, including one last year that discussed AI-generated propaganda emanating from China and North Korea.
However, it must be said that AI threats extend well beyond state actors. Deepfake fraud is rising sharply in the US, UK, and other Western nations, just as it is in the East.
Microsoft is now tracking over 300 unique threat actors, including nation-state actors and ransomware groups, and is itself leveraging AI to strengthen its defenses and disrupt malicious activity.
AI cyber threats are on the rise, and the reality is that they’re international and hard to contain.