Harry Potter and the Effective Altruists running OpenAI

November 21, 2023

The decision to fire Sam Altman may have been influenced by the effective altruist ideals shared by OpenAI’s board members. Interim CEO Emmett Shear shares some of their fears about AI.

In broad terms, the concept of Effective Altruism (EA) is to take on the world’s most pressing problems and take logical action to maximize positive outcomes for humanity.

When applied to the AI space, many of the proponents of EA are sounding the alarm on the breakneck pace of development and potential existential risks to humanity.

On the other side of the spectrum are the Effective Accelerationists (e/acc) who say AI won’t kill us all and we should move faster.

At least half of the current OpenAI board members seem to fall squarely on the EA side, and their AI fears may have been behind recent developments at the company.

Helen Toner used to head Effective Altruism Melbourne, and Tasha McCauley sits on the board of Effective Ventures, which runs the Centre For Effective Altruism.

Enter Emmett Shear

In a recent podcast Emmett Shear, OpenAI’s third CEO in almost as many days, expressed support for the views of the EA prophets of AI doom.

Shear is also a fan of AI researcher Eliezer Yudkowsky, who wrote a Harry Potter fan fiction novel that champions rational and scientific thinking. Harry Potter and the Methods of Rationality (HPMOR) is popular among the EA community and is used in its recruitment efforts.

In chapter 104 of the 660,000-word novel, Yudkowsky writes about a Seeker called Emmett Shear falling off a broomstick in a Quidditch match.

Emmett Shear gets a mention in a Harry Potter fan fiction novel. Source: Less Wrong

If Shear shares the more extreme views that Yudkowsky has regarding AI safety, then we can expect more pumping of the brakes at OpenAI.

Earlier this year, tech leaders like Elon Musk and Emad Mostaque joined thousands in signing an open letter calling for a 6-month pause in AI development. Yudkowsky declined to sign, saying the letter was “understating the seriousness of the situation and asking for too little to solve it.”

In a letter published in Time, Yudkowsky said that AI development should stop completely or else “literally everyone on Earth will die.”

In his letter, Yudkowsky was unambiguous about his views, saying, “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

We know that Shear agrees with some of these sentiments. In the podcast, Shear says, “I know that Eliezer thinks that like, we’re all doomed for sure. I buy his doom argument. I buy the chain and the logic.”

Will EA save us and doom OpenAI?

Most people probably first heard about effective altruism when disgraced FTX founder Sam Bankman-Fried came on the scene. He claimed the principles of EA were behind his desire to make a lot of money to do a lot of good.

His example shows that people who have a strong desire to make a positive difference are not immune to making very stupid decisions.

If the OpenAI board continues to prioritize the principles of EA, then it’s hard to see how Sam Altman could pursue AGI and profitability if and when he’s reappointed.

If the board moderates its views and listens to e/acc advocates like computer scientist Yann LeCun, then OpenAI may still be a viable project.

Mark Zuckerberg famously encouraged his engineers to “move fast and break things.” Altman may share that view on AI development, but the current OpenAI leadership clearly doesn’t.

Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
