The NIST publishes paper on four possible types of generative AI attacks

January 6, 2024

NIST

The US National Institute of Standards and Technology (NIST) has raised concerns about the security of predictive and generative AI systems.

According to Apostol Vassilev, a computer scientist at NIST, despite advancements in security, these technologies remain vulnerable to a variety of attacks.

In a collaborative paper titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” Vassilev, along with colleagues from Northeastern University and Robust Intelligence, categorize the security risks posed by AI systems. 

Vassilev stated, “Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences.” 

He also warned against any company that claims to offer ‘fully secure AI.’

This is part of the NIST Trustworthy and Responsible AI initiative, aligning with US government goals for AI safety. It examines adversarial machine learning techniques, focusing on four main security concerns: evasion, poisoning, privacy, and abuse attacks.

Evasion attacks happen post-deployment, altering inputs to confuse AI systems. For example, modifying stop signs to be misread by autonomous vehicles as speed limit signs, or creating deceptive lane markings to lead vehicles astray.

In poisoning attacks, corrupt data is introduced during training. This could involve embedding frequent inappropriate language in training datasets, leading a chatbot to adopt this language in customer interactions.

Privacy attacks aim to extract sensitive information about the AI or its training data, often through reverse-engineering methods. This can involve using a chatbot’s responses to discern its training sources and weaknesses.

Abuse attacks manipulate legitimate sources, like webpages, feeding AI systems false information to alter their functioning. This differs from poisoning attacks, which corrupt the training process itself.

Evasion attacks involve creating adversarial examples to deceive AI systems during deployment, like misidentifying stop signs in autonomous vehicles. 

Alina Oprea from Northeastern University, who was involved in the study, explained, “Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities.”

NIST criticized for links to AI think-tank

Separately, concerns have been raised over a planned AI research partnership between NIST and the RAND Corp.

RAND, known for its ties to tech billionaires and the effective altruism movement, played a significant advisory role in shaping the AI safety executive order

Members of the House Committee on Science, Space, and Technology, including Frank Lucas and Zoe Lofgren, criticized the lack of transparency in this partnership. 

The committee’s concerns are twofold: First, they are questioning why there wasn’t a competitive process for selecting RAND for this AI safety research. 

Usually, when government agencies like NIST provide research grants, they open up the opportunity for different organizations to apply, ensuring a fair selection process. But in this case, it seems RAND was chosen without such a process.

Second, there is some unease about RAND’s focus on AI research. RAND has been involved in AI and biosecurity studies and has recently received significant funding for this work from sources closely linked to the tech industry. 

Join The Future


SUBSCRIBE TODAY

Clear, concise, comprehensive. Get a grip on AI developments with DailyAI

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.

×

FREE PDF EXCLUSIVE
Stay Ahead with DailyAI

Sign up for our weekly newsletter and receive exclusive access to DailyAI's Latest eBook: 'Mastering AI Tools: Your 2024 Guide to Enhanced Productivity'.

*By subscribing to our newsletter you accept our Privacy Policy and our Terms and Conditions