AI think tank RAND played a key role in drafting Biden’s Executive Order

December 16, 2023

The RAND Corporation, a think tank with deep ties to tech billionaires’ funding networks, notably through Open Philanthropy, played a critical role in drafting President Joe Biden’s executive order on AI. 

According to Politico, the order, which introduced comprehensive AI reporting requirements, was heavily influenced by effective altruism, a philosophy advocating a data-driven approach to philanthropy.

RAND’s involvement has raised eyebrows due to its significant funding from groups like Open Philanthropy, linked to tech leaders such as Dustin Moskovitz. According to RAND spokesperson Jeffrey Hiday, RAND exists to “[conduct] research and analysis on critical topics of the day, [and] then [share] that research, analysis and expertise with policymakers.”

In this case, that work involved extensive consulting on the recent executive order, including drafting the final documents.

Earlier this year, the RAND Corporation received over $15 million in discretionary grants from Open Philanthropy, earmarked for AI and biosecurity projects.

Open Philanthropy, known for its effective altruism approach, maintains both personal and financial connections with AI enterprises such as Anthropic and OpenAI. 

Furthermore, leading figures at RAND are intertwined with the corporate structures of these AI companies, in what remains a relatively compact industry at its highest echelons, at least in the US.

Critics argue that the think tank’s alignment with effective altruism might skew its research focus, overshadowing immediate AI concerns like racial bias or copyright infringement.

The dynamics at RAND also reflect a broader trend in which effective altruism is increasingly shaping AI policy, or at least its narrative. This movement, championed by controversial figures like Sam Bankman-Fried, advocates addressing long-term existential risks, including the possibility that advanced AI could be used to develop bioweapons.

However, this focus has been criticized for potentially serving top tech companies’ interests by deflecting attention from existing AI harms.

In essence, effective altruism risks postponing immediate, practical action in favor of more hypothetical, long-term plans. 

OpenAI’s internal struggle: altruism vs. commercialization

OpenAI, initially a non-profit, now grapples with the tension between these altruistic goals and the realities of business and profit-making, especially after Microsoft’s $1 billion investment and the company’s recent $86 billion valuation.

It was relatively simple for OpenAI to maintain this philosophy when it was rather lonely at the top of the generative AI industry.

Now, with competition heating up, particularly from Google’s Gemini Ultra, which directly threatens GPT-4’s supremacy, exercising restraint and care while defending that coveted spot at the top of the pile of AI models is far from easy.

Tension at OpenAI came to a head over the leadership of CEO Sam Altman, whose approach to managing the company embodied the conflict between Silicon Valley’s techno-capitalism and the rising narrative surrounding AI’s risks. Some speculated that Altman wasn’t taking safety seriously at the company, though this remains unconfirmed.

Despite board members’ concerns about his commitment to safety and transparency, Altman’s reinstatement as CEO signified a pivotal moment in the company’s history, raising questions about the influence of effective altruism and the board’s power.

The question now is whether effective altruism and long-termist ideals can coexist with the fast-paced commercial and technological advances of the AI sector.

Regulation may safeguard big AI developers’ commercial interests

Earlier in the year, a leaked memo by Google researcher Luke Sernau suggested that the open-source AI community posed a direct challenge to the dominance of all leading AI developers.

The memo said, “We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be? But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source.”

Open-source models, exemplified by Meta’s LLaMA and Mistral’s Mixtral, are rapidly closing the gap between grassroots innovation and big tech.

While presented as a move towards responsible AI development, the push for AI regulation by companies like Google and OpenAI may also undermine the open-source AI community, which offers a decentralized alternative to the centralized models.

Of course, open-source models are also cheaper and enable enterprises, research institutions, students, and other users to build a level of ownership and sovereignty into their solutions.

Are commercial AI developers driven by genuine concern for AI’s safe and ethical development, or are their calls for regulation, in part, strategic maneuvers to maintain market dominance and control over AI innovation?

The AI industry’s intersection of altruism, politics, and commerce is exceptionally complex. As AI advances, reconciling these diverse interests will continue to divide opinion.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
