OpenAI builds new “Preparedness” team to handle AI’s existential risks

October 29, 2023


OpenAI has assembled a team of experts named “Preparedness,” aiming to mitigate the “catastrophic” risks associated with AI.

This comes not long after OpenAI comprehensively rebranded itself as an AGI-centric company. The new team is dedicated to evaluating both current and future AI models for risks. 

Risks specified by OpenAI include individualized persuasion (tailoring messages to precisely what the recipient wants to hear), threats to cybersecurity, autonomous replication and adaptation (AI altering and copying itself independently), and even existential threats such as chemical, biological, and nuclear attacks.

Experts remain divided over whether AI’s existential risks are overhyped or genuinely plausible, though most agree that current problems, such as AI deepfakes, are already tremendously disruptive.

OpenAI’s statement on the matter is clear: “We believe that frontier AI models, which will exceed the capabilities of the most advanced existing models, have the potential to benefit all of humanity. However, they also pose increasingly severe risks.” 

Artificial general intelligence, or AGI, refers to AI models that equal or surpass human cognition across multiple modalities.

OpenAI CEO Sam Altman has made his intention to create such models clear, but the concept of AGI in real terms remains highly speculative. 

Altman recently admitted on the Joe Rogan podcast that AI hadn’t developed as he expected, and that AGI might not arrive spontaneously as a ‘singularity’ but instead emerge gradually over the coming decade or so.

OpenAI on Preparedness

In a recent blog post, OpenAI Safety & Alignment and Community teams highlighted the organization’s ongoing commitment to addressing the full spectrum of safety risks associated with AI.

This move comes after OpenAI, along with other prominent AI labs, voluntarily committed to upholding safety, security, and trust in AI development as part of a collaboration with the White House. 

The Preparedness team is led by OpenAI researcher Aleksander Madry. The team will be crucial in assessing capabilities, conducting evaluations, and performing internal red teaming for frontier models, ranging from near-future developments to AGI models. 

Additionally, the team will develop and maintain a “Risk-Informed Development Policy (RDP),” outlining an approach to evaluating and monitoring frontier model capabilities, implementing protective measures, and establishing governance structures for accountability and oversight.

OpenAI is also launching the “AI Preparedness Challenge” to uncover areas of AI misuse. 

The challenge will reward top submissions with $25,000 in API credits, publish novel ideas and entries, and potentially recruit candidates for the Preparedness team. 

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
