Former OpenAI employees publish ‘Right to Warn’ open letter

June 5, 2024

  • In an open letter, current and former OpenAI and Google employees call for the freedom to report AI risks
  • The letter is titled “A right to warn about advanced artificial intelligence”
  • The letter says AI companies are disincentivized to report risks and that employees are punished when they do

A group of former and current OpenAI and Google employees is calling out AI companies over what they say is a dangerous culture of secrecy surrounding AI risks.

The letter, titled “A right to warn about advanced artificial intelligence,” states that AI companies have strong financial incentives to avoid effective oversight of potential AI risks.

Besides accusing companies of recklessly prioritizing financial objectives over safety, the letter says they use punitive confidentiality agreements to actively discourage employees from raising concerns.

The signatories are all former OpenAI and Google employees, with the exception of Neel Nanda, who still works at Google. The letter was also endorsed by leading AI minds Yoshua Bengio, Geoffrey Hinton, and Stuart Russell.

As evidence of how fraught it is to call out a former employer, six of the signatories were unwilling to disclose their names in the letter.

Former OpenAI researchers Daniel Kokotajlo and William Saunders, who also signed the letter, left the company earlier this year.

Kokotajlo was on the governance team, and Saunders worked on OpenAI’s Superalignment team, which was disbanded last month when Ilya Sutskever and Jan Leike also left over safety concerns.

Kokotajlo explained his reason for leaving on a forum, saying he doesn’t think OpenAI will “behave responsibly around the time of AGI.”

A call to action

With no regulation governing AI risks that the public doesn’t yet know about, the letter calls for a greater voluntary commitment from the AI companies themselves.

The letter says, “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

The letter calls for AI companies to commit to four principles. In short, they want companies to:

  • Not enter into or enforce agreements that prohibit criticism of the company over safety concerns, or withhold financial benefits due to the employee. (Ahem, OpenAI.)
  • Facilitate an anonymous process for employees to raise risk-related concerns to the company’s board, regulators, or other appropriate organizations.
  • Support a culture of open criticism that allows employees to raise risk-related concerns publicly without revealing intellectual property.
  • Not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

Several of the names on the list of signatories consider themselves effective altruists. From their posts and comments, it’s clear that people like Daniel Kokotajlo (LessWrong) and William Saunders (AI Alignment Forum) believe things could end very badly if AI risks aren’t managed.

But these aren’t doomsayer trolls on a forum calling out from the sidelines. These are leading intellects that companies like OpenAI and Google saw fit to employ to create the tech they now fear.

And now they’re saying, ‘We’ve seen stuff that scares us. We want to warn people, but we’re not allowed to.’

You can read the letter here.


