RAND report says LLMs don’t increase risk of biological attacks

  • RAND researchers concluded that AI does not meaningfully increase the risk of a bio-weapons attack
  • Their red-team exercise showed that current LLMs deliver only information that is already available on the internet
  • A previous RAND report said malicious non-state actors could use AI to make a bio-weapon

A report from the think tank RAND has concluded that current LLMs do not meaningfully increase the risk of a biological attack by a non-state actor.

In October last year, the same group of researchers released a report that raised the possibility that LLMs “could assist in the planning and execution of a biological attack.” That report did note that the risk of this happening in practice needed further research.

The October report, titled “The Operational Risks of AI in Large-Scale Biological Attacks,” was criticized by some on the e/acc side of the ‘is AI dangerous?’ debate. Meta’s Chief AI Scientist, Yann LeCun, said the report oversimplified what it takes to create a bio-weapon.

We’re OK for now

The latest RAND report, titled “Current Artificial Intelligence Does Not Meaningfully Increase Risk of a Biological Weapons Attack,” confirmed LeCun’s assessment and took some wind out of the sails of those on the effective altruist side of the aisle.

The researchers, led by Senior RAND Engineer Christopher A. Mouton, ran a red-teaming exercise to see how a malicious non-state actor might use an LLM to build a biological weapon.

Participants were tasked with planning a biological attack; some had access to both the internet and an LLM, while others had access only to the internet. The researchers didn’t name the LLMs that were used.

The result was that there was “no statistically significant difference in the viability of plans generated with or without LLM assistance.” In other words, ChatGPT isn’t going to tell you anything you can’t already Google.

In fact, their results showed that the plans developed by the teams using an LLM were marginally less viable than those produced by teams with internet access alone. It’s comforting that no team was able to come up with a bio-attack plan that was actually workable.

As AI models get more intelligent, that could change. Mouton said, “Just because today’s LLMs aren’t able to close the knowledge gap needed to facilitate biological weapons attack planning doesn’t preclude the possibility that they may be able to in the future.”

More research needed

The report acknowledged that the research didn’t determine how big that knowledge gap was. Mouton said further research into this was important “because AI technology is available to everyone—including dangerous non-state actors—and it’s advancing faster than governments can keep pace.”

People pushing for the development of AGI say the potential for a superintelligent AI is real and achievable. But many of those same people will now be saying ‘I told you so’ while scoffing at the idea that AI could pose a risk like the one RAND investigated.

We probably don’t need to be running around shouting ‘The sky is falling!’, but to insist that an artificial intelligence smarter than us could never enable bad actors is also ignorant. And dangerous.

© 2023 Intelliquence Ltd. All Rights Reserved.
