Effective altruism, long-termism, and politics in OpenAI

November 24, 2023


Recent events at OpenAI have highlighted the ideologies of “effective altruism” and its offshoot, “long-termism.”

But what do these ‘philosophies’ even mean? And how are they affecting AI politics and governance?

OpenAI was initially founded as a non-profit. Then, in 2019, it became a “capped” for-profit, defined as a “partnership between our original Nonprofit and a new capped profit arm.”

Once the company realized what it was sitting on, it was never going to keep investors and profit-making at arm's length forever.

Since 2019, the influence of effective altruism, which once underpinned OpenAI's mission, has steadily faded.

What is effective altruism?

Effective altruism is a relatively young philosophical movement, developed by Oxford University academics such as William MacAskill and Toby Ord.

In a nutshell, it seeks to maximize positive impact on the world, defined as "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis."

Its offshoot, long-termism, focuses on reducing existential risks to safeguard the well-being of future generations. It was born partly out of criticism of short-term, cyclical politics, arguing that current systems of governance can't deliver the kind of systematic, consistent action needed to protect the planet.

OpenAI, founded with lofty goals of developing AI for humanity’s benefit, resonated with these principles. 

However, the company's commercial expansion and appetite for outside investment, such as Microsoft's $1 billion injection, created a conflict between those altruistic goals and practical business needs.

Altruistic ideology at OpenAI

This clash of ideologies reached a breaking point inside OpenAI.

On one side is the Silicon Valley brand of techno-capitalism, championing disruptive ideas with massive venture capital. 

On the other is the slower, more cautious approach that has become the de facto stance among AI tech bosses. What else can they say in the face of national and international pressure to moderate their products carefully?

For AI to progress, it has to appease those who are scared of it. While there is considerable debate about whether AI truly is dangerous, the point remains that many people are worried.

This dichotomy was epitomized by Sam Altman, who, while pushing OpenAI towards becoming a tech powerhouse, also warned of the existential threats posed by advanced AI. Again, what choice did he have?

To reassure critics, Altman highlighted how OpenAI's structure protected the company from treading riskier paths, but his reinstatement as CEO puts paid to that. Altman was effectively reinstated by force, whether you consider it a heartfelt act of solidarity or not.

"On paper, the board had the power, that's how Sam Altman presented it," says Toby Ord, one of the founders of effective altruism and a member of the Centre for the Governance of AI (GovAI).

He continued, “Instead, a very large amount of power seems to reside with Altman personally, and the employees and Microsoft.” 

Board dynamics and challenges

The dismissal of Altman by board members, some of whom were connected to the effective altruism movement, threatened OpenAI's ability to keep marketing itself as a novel company focused on humanitarian benefit.

Board members alleged that Altman was not being "candid in his communications," but many believe the real issue was that he wasn't taking safety seriously enough.

OpenAI's new board, while bound by the same rules as the old one for now, is unlikely to put up the same resistance to Altman.

The resolution of this crisis, with Altman's return and a board reshuffle, leaves unanswered questions about the future direction of the company and its founding principles. OpenAI has been unleashed, and it's hard to imagine Altman being removed again before he decides he's finished.

However, it will still be tough for OpenAI to accelerate riskier AI development purely because of this board reshuffle. If the company is too bullish, regulators might expedite action against it. It's already under close inspection by bodies such as the FTC.

How can companies harmonize long-term altruistic philosophies with the realities of technological and commercial progress?

Are the two compatible? Were they ever compatible? 


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
