A report from the think tank RAND has concluded that current LLMs do not meaningfully increase the risk of a biological attack by a non-state actor.
In October last year, the same group of researchers released a report that raised the possibility that LLMs “could assist in the planning and execution of a biological attack.” That report did note that further research was needed to determine how real the risk was in practice.
The October report, titled “The Operational Risks of AI in Large-Scale Biological Attacks,” was criticized by some on the e/acc side of the ‘AI is dangerous’ debate. Meta’s Chief AI Scientist Yann LeCun said the report oversimplified what it takes to create a bioweapon.
Perhaps an LLM can save you a bit of time, over searching for bioweapon building instructions on a search engine.
But then, do you know how to do the hard lab work that’s required? https://t.co/QnnZqUOP6X
— Yann LeCun (@ylecun) November 2, 2023
We’re OK for now
The latest RAND report, titled “Current Artificial Intelligence Does Not Meaningfully Increase Risk of a Biological Weapons Attack,” confirmed LeCun’s assessment and took some wind out of the sails of those on the effective altruist side of the aisle.
The researchers, led by Senior RAND Engineer Christopher A. Mouton, ran a red-teaming exercise to see how a malicious non-state actor might use an LLM to build a biological weapon.
Participants were tasked with planning a biological attack; some teams had access to both the internet and an LLM, while others had access to the internet only. The researchers did not name the LLMs that were used.
The result was that there was “no statistically significant difference in the viability of plans generated with or without LLM assistance.” In other words, ChatGPT isn’t going to tell you anything you can’t already Google.
In fact, the results showed that the plans developed by teams using an LLM were marginally less viable than those developed by teams with internet access only. Reassuringly, no team was able to come up with a bio-attack plan that was actually workable.
As AI models get more intelligent, that could change. Mouton said, “Just because today’s LLMs aren’t able to close the knowledge gap needed to facilitate biological weapons attack planning doesn’t preclude the possibility that they may be able to in the future.”
Can emerging AI tools—specifically, large language models (LLMs)—be used to launch a large-scale biological attack?
A new RAND study explores this question through a red-team exercise. Here are the results. 🧵 https://t.co/foFmWltIuQ
— RAND (@RANDCorporation) January 25, 2024
More research needed
The report acknowledged that the research didn’t determine how big that knowledge gap was. Mouton said further research into this was important “because AI technology is available to everyone—including dangerous non-state actors—and it’s advancing faster than governments can keep pace.”
People pushing for the development of AGI say the potential for a superintelligent AI is real and achievable. Yet many of those same people are now saying ‘I told you so’ and scoffing at the idea that AI could pose a risk like the one RAND investigated.
We probably don’t need to be running around shouting ‘The sky is falling!’, but to say that an artificial intelligence smarter than us won’t potentially enable bad actors is also ignorant. And dangerous.