Researchers join open letter advocating for independent AI evaluations

March 7, 2024

Over 100 leading AI experts have issued an open letter demanding that companies behind generative AI technologies, such as OpenAI and Meta, open their doors to independent testing. 

Their message is clear: AI developers’ terms and conditions are curbing independent research efforts into AI tool safety. 

Co-signatories include leading experts such as Stanford’s Percy Liang, Pulitzer Prize-winner Julia Angwin, Stanford Internet Observatory’s Renée DiResta, Mozilla Fellow Deb Raji, former European Parliament member Marietje Schaake, and Suresh Venkatasubramanian of Brown University.

Researchers argue that the lessons from the social media era, when independent research was often marginalized, should not be repeated.

To combat this risk, they ask that OpenAI, Meta, Anthropic, Google, Midjourney, and others create a legal and technical safe space for researchers to evaluate AI products without fearing being sued or banned.

The letter says, “While companies’ terms of service deter malicious use, they also offer no exemption for independent good faith research, leaving researchers at risk of account suspension or even legal reprisal.”

AI companies impose strict usage policies to prevent their tools from being manipulated into bypassing guardrails. For example, OpenAI recently branded investigative efforts by the New York Times as “hacking,” and Meta has threatened to withdraw licenses over intellectual property disputes. 

A recent study probed Midjourney and revealed numerous instances of copyright violation, an investigation that would itself have breached the company’s T&Cs.

The problem is that because AI tools are largely unpredictable under the hood, their ‘safety’ depends on people using them in prescribed ways. 

However, those same policies make it tough for researchers to probe and understand models. 

The letter, published on MIT’s website, makes two pleas:

1. “First, a legal safe harbor would indemnify good faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules.”

2. “Second, companies should commit to more equitable access, by using independent reviewers to moderate researchers’ evaluation applications, which would protect rule-abiding safety research from counterproductive account suspensions, and mitigate the concern of companies selecting their own evaluators.”

The letter also introduces a policy proposal, co-drafted by some signatories, which suggests modifications to the companies’ terms of service to accommodate academic and safety research.

The letter adds to a broadening consensus about the risks associated with generative AI, including bias, copyright infringement, and the creation of non-consensual intimate imagery. 

By advocating for a “safe harbor” for independent evaluation, these experts are championing the cause of public interest, aiming to create an ecosystem where AI technologies can be developed and deployed responsibly, with the well-being of society at the forefront.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
