Director of the NSA and US Cyber Command, General Paul Nakasone, announced a new AI security center in an address to the National Press Club.
As AI becomes more integrated into defense, national security, and civilian infrastructure, the NSA increasingly sees the need to address the associated risks.
A recent study conducted by the NSA identified securing AI models from theft and sabotage as a significant challenge to national security.
Nakasone said, “AI will be increasingly consequential for national security in diplomatic, technological, and economic matters for our country and our allies and partners.” The “diplomatic” and “technological” aspects of national security are coming under sharper focus as world powers compete on the AI stage.
“Our adversaries, who have for decades used theft and exploitation of our intellectual property to advance their interests, will seek to co-opt our advances in AI and corrupt our application of it,” said Nakasone, without explicitly mentioning China.
The new AI security center will fall under the operational oversight of the NSA’s Cybersecurity Collaboration Center. Nakasone explained that the new center would focus on “promoting the secure adoption of new AI capabilities across the national security enterprise and the defense industrial base.”
AI: Both the cause and solution to the problem
The overt risks of AI as a weapon are among the clearest areas the NSA is focusing on.
The NSA Director said that his department had not yet detected efforts by China or Russia to influence the upcoming US elections, but based on past experience, the risk remained.
A recent report highlighted an uptick in cyberattacks from Chinese cyber actors known as BlackTech. As AI improves, so will the sophistication and scale of these attacks.
Using AI to process large amounts of data and inform decisions to identify and mitigate these risks is an attractive prospect. But as AI is incorporated into the operations of national security organizations like the NSA, it also opens them up to new risks.
If we don’t understand exactly how an AI model arrives at a recommendation, can we trust it? And if the model is prone to bias or hallucination, how would we know whether its underlying dataset had been corrupted by bad actors?
The CIA has announced that it is developing its own ChatGPT-style AI tool, and DARPA is researching how to use AI to make battlefield decisions. Imagine what could happen if these AI tools were influenced to give bad advice.
The NSA’s security center aims to protect these institutions from having their technology stolen or sabotaged by the US’s adversaries. The Director also emphasized that AI would not be automating national security decisions.
“AI helps us, but our decisions are made by humans,” said Nakasone. Yet the better AI gets, the more those humans will rely on it to make their decisions.
And the more national security relies on AI, the more vulnerable society becomes to an AI glitch or sabotage.