OpenAI’s superalignment meltdown: can any trust be salvaged?

May 18, 2024

  • OpenAI attracts controversy as its superalignment safety team falls apart
  • This includes the departure of Jan Leike and Ilya Sutskever
  • CEO Sam Altman may struggle to restore company trust and morale

Ilya Sutskever and Jan Leike from OpenAI’s “superalignment” team resigned this week, casting a shadow over the company’s commitment to responsible AI development under CEO Sam Altman.

Leike, in particular, did not mince words. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he declared in a parting shot, confirming the unease of those observing OpenAI’s pursuit of advanced AI.

Sutskever and Leike are the latest entries in an ever-lengthening list of high-profile shake-ups at OpenAI.

Since November 2023, when Altman narrowly survived a boardroom coup attempt, at least five other key members of the superalignment team have either quit or been forced out:

  • Daniel Kokotajlo, who joined OpenAI in 2022 hoping to steer the company toward responsible artificial general intelligence (AGI) development – highly capable AI that matches or exceeds human cognition – quit in April 2024 after losing faith in leadership’s ability to “responsibly handle AGI.”
  • Leopold Aschenbrenner and Pavel Izmailov, superalignment team members, were allegedly fired last month for “leaking” information, though OpenAI has provided no evidence of wrongdoing. Insiders speculate they were targeted for being Sutskever’s allies.
  • Cullen O’Keefe, another safety researcher, departed in April.
  • William Saunders resigned in February but is apparently bound by a non-disparagement agreement that prevents him from discussing his reasons.

Amid these developments, OpenAI has allegedly threatened to revoke employees’ equity rights if they criticize the company or Altman himself, according to Vox.

That’s made it tough to get a clear picture of what’s happening inside OpenAI, but the evidence suggests that safety and alignment initiatives are failing, if they were ever sincere in the first place.

OpenAI’s controversial plot thickens

OpenAI, co-founded in 2015 by Elon Musk, Sam Altman, and others, was initially committed to open-source research and responsible AI development.

However, as the company’s vision has expanded in recent years, it’s found itself retreating behind closed doors. In 2019, OpenAI officially transitioned from a non-profit research lab to a “capped-profit” entity, fueling concerns about a shift toward commercialization over transparency.

Since then, OpenAI has guarded its research and models with iron-clad non-disclosure agreements and the threat of legal action against any employees who dare to speak out. 

Other key controversies in the startup’s short history include:

  • Last year, reports emerged of closed-door meetings between OpenAI and military and defense organizations.
  • Altman’s erratic tweets have raised eyebrows, from musings about AI-powered global governance to acknowledgments of existential-level risk that cast him as the pilot of a ship he can no longer steer.
  • In the most serious blow to Altman’s leadership to date, Sutskever himself was part of a failed boardroom coup in November 2023 that sought to oust the CEO. Altman managed to cling to power, showing he is so deeply bound to the company that even the board struggles to pry him loose.

While boardroom dramas and founder crises aren’t uncommon in Silicon Valley, OpenAI’s work, by its own admission, could be critical for global society.

The public, regulators, and governments want consistent, controversy-free governance at OpenAI, but the startup’s short, turbulent history suggests anything but.

OpenAI is becoming the antihero of generative AI

While armchair diagnosis and character assassination of Altman are irresponsible, his reported history of manipulation and pursuit of personal visions at the expense of collaborators and public trust raise uncomfortable questions.

Reflecting this, conversations surrounding Altman and his company have become increasingly vicious across X, Reddit, and the Y Combinator forum.

While tech bosses are often polarizing, they usually win followings, as Elon Musk demonstrates among the more provocative types. Others, like Microsoft CEO Satya Nadella, win respect for their corporate strategy and controlled, mature leadership style.

Let’s also recognize how other AI startups, like Anthropic, manage to keep a fairly low profile despite their high achievements in the generative AI industry. OpenAI, on the other hand, maintains an intense, controversial presence that keeps it in the public eye, benefiting neither its own image nor that of generative AI as a whole.

In the end, we should call it like it is. OpenAI’s pattern of secrecy has contributed to the sense that it’s no longer a good-faith actor in AI.

It leaves the public wondering whether generative AI could erode society rather than help it. It sends a message that pursuing AGI is a closed-door affair, a game played by tech elites with little regard for the wider implications.

The moral licensing of the tech industry

Moral licensing has long plagued the tech industry: the supposed nobility of a corporate mission is invoked to justify ethical compromises.

From Facebook’s “move fast and break things” mantra to Google’s “don’t be evil” slogan, tech giants have repeatedly invoked the language of progress and social good while engaging in questionable practices.

OpenAI’s mission to research and develop AGI “for the benefit of all humanity” invites perhaps the ultimate form of moral licensing.

The promise of a technology that could solve the world’s greatest challenges and usher in an era of unprecedented prosperity is a seductive one. It appeals to our deepest hopes and dreams, tapping into the desire to leave a lasting, positive impact on the world.

But therein lies the danger. When the stakes are so high and the potential rewards so great, it becomes all too easy to justify cutting corners, skirting ethical boundaries, and dismissing critique in the name of a ‘greater good’ no individual or small group can define, not even with all the funding and research in the world.

This is the trap that OpenAI risks falling into. By positioning itself as the creator of a technology that will benefit all of humanity, the company has essentially granted itself a blank check to pursue its vision by any means necessary.

So, what can we do about it all? Well, talk is cheap. Robust governance, continuous and constructive dialogue, and sustained pressure to improve industry practices are key.

As for OpenAI itself, Altman’s position could become less tenable as public pressure and media scrutiny grow.

If he were to leave or be ousted, we’d have to hope that something positive fills the immense vacuum he’d leave behind. 


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
