Congress concerned about RAND’s influence on AI safety body

December 27, 2023

The executive order that President Biden signed in October tasked the National Institute of Standards and Technology (NIST) with researching how to test and analyze the safety of AI models. Now the RAND Corporation's influence on NIST is coming under scrutiny.

RAND is an influential think tank with deep ties to tech billionaires and AI industry players aligned with the "effective altruism" movement. It recently came to light that RAND played an important advisory role in shaping Biden's executive order on AI safety.

In a letter, the US House Committee on Science, Space, and Technology voiced its concern over how NIST is carrying out its mandate and how research from groups like RAND may influence its work.

The letter, addressed to NIST, said the committee was "concerned about how the AISI will fund outside organizations and the transparency of those awards."

Who is advising the AISI?

NIST established the Artificial Intelligence Safety Institute (AISI) and will likely outsource much of the AI safety research it has been tasked with. Who will it farm the research out to? NIST isn't saying, but two groups have reportedly been engaged by the institute.

The committee's letter didn't mention RAND by name, but a reference to a report from the think tank makes it clear the committee is concerned that RAND may continue to exert its influence on AI regulation.

The letter stated, "Organizations routinely point to significant speculative benefits or risks of AI systems but fail to provide evidence of their claims, produce nonreproducible research, hide behind secrecy, use evaluation methods that lack construct validity, or cite research that has failed to go through robust review processes, such as academic peer review."

To illustrate its point, the letter links to research by RAND Corp. titled "The Operational Risks of AI in Large-Scale Biological Attacks." That report concluded that LLMs "could assist in the planning and execution of a biological attack."

How will this affect AI regulation?

The motivation behind the unsubtle AI fearmongering in RAND’s research becomes clearer when you follow the money. RAND received $15.5 million in grants from Open Philanthropy, a funding organization focused on supporting effective altruism causes.

Proponents of effective altruism are among the loudest voices calling for a halt or slowdown in the development of AI. If RAND is one of the two organizations reportedly tasked with doing research for the AISI, then we can expect tighter AI regulation in the near future.

The committee’s letter said, “In implementing the AISI, we expect NIST to hold the recipients of federal research funding for AI safety research to the same rigorous guidelines of scientific and methodological quality that characterize the broader federal research enterprise.”

In other words, just because someone like RAND says that AI is an existential risk doesn't make it so.

Will NIST invite input from AI accelerationists like Meta's VP and Chief AI Scientist Yann LeCun? In a recent interview with Wired, LeCun said, "Whenever technology progresses, you can't stop the bad guys from having access to it. Then it's my good AI against your bad AI. The way to stay ahead is to progress faster. The way to progress faster is to open the research, so the larger community contributes to it."

NIST's work is strongly focused on making accurate measurements. Whom it chooses as advisors will inevitably tip the scales of AI regulation.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
