Is OpenAI sitting on a dangerous AI model that led to Altman’s firing?

November 23, 2023


Another day, another twist in the OpenAI-Altman saga. 

This time, the reason for Altman’s firing is an apocalyptically powerful AI model sitting in an OpenAI research lab, or at least that’s what media sources suggest.

Just days before Sam Altman’s temporary departure from OpenAI, sources interviewed by Reuters allege the company’s researchers sent a warning letter to the board of directors.

This letter, which wasn’t publicly disclosed until recently, raised alarms about an AI breakthrough. According to two insiders who contacted Reuters, it’s potent enough to threaten humanity.

Sources allege the model in question may have been pivotal in the events leading to Altman’s firing. 

The project in question is known as Q* (pronounced Q-Star). Some at OpenAI see it as a potential milestone in the quest for artificial general intelligence (AGI). Q* is reportedly an amalgamation of machine learning approaches, including Q-learning, which dates back to the 1980s. 

While the media loves an apocalyptic AI story, these anonymous sources indicated that the board’s decision to fire Altman was influenced by concerns about prematurely commercializing Q* without fully understanding its implications. 

However, Reuters has not been able to independently confirm the claimed capabilities of Q* as described by the researchers.

Moreover, Reuters has not had access to the letter, and the staff responsible for writing it have not responded to inquiries. 

It doesn’t leave us much to work with. And the fact that almost every OpenAI employee pleaded for Altman’s return makes it seem unlikely that fear of Q* was widespread within the company.

Following Altman’s dismissal of fears over Q*, the board decided to fire him – or at least, that’s what this letter and its associated news stories allege. 

But is there any substance to this? Or is it just another strange, speculative twist in the OpenAI boardroom drama?

What is Q*, and how does it work?

While speculative, Q* (Q-Star) could combine elements of Q-learning and A* (A Star) search algorithms optimized through a process called Reinforcement Learning from Human Feedback (RLHF). 

It’s not an entirely new idea: researchers have speculated about techniques related to Q* before, and those papers give us some clues as to how it might work. 

Let’s break down each component to understand how they might interact in Q*:

Q-learning in Q*

Q-learning is a type of reinforcement learning algorithm that has been around for some 30 years. It’s designed to help an agent learn the best action to take in a given state to maximize a reward. It does this by learning a value function, known as the Q-function, which estimates the expected utility of taking a given action in a given state.

In the context of generative AI models like those OpenAI develops, Q-learning could determine the optimal sequence of words or responses in a conversation or a problem-solving task. 

Each word or response can be seen as an action, and the states can be the context or the sequence of words already generated.
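
To make that concrete, here’s a minimal sketch of classic tabular Q-learning in Python. The five-state chain environment, the reward scheme, and the hyperparameters are all invented for this illustration; nothing here reflects anything known about OpenAI’s actual system.

```python
import random

# Toy illustration: tabular Q-learning on a 5-state chain. The agent starts
# at state 0 and is rewarded for reaching state 4. The environment, reward
# scheme, and hyperparameters are all invented for this sketch.
N_STATES = 5
ACTIONS = [0, 1]                        # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the chain; reward 1.0 for reaching the final state."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for _ in range(300):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: explore occasionally, otherwise act greedily
        # (ties between equal Q-values are broken at random).
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        next_state, reward = step(state, action)
        # The core Q-learning update: move Q(s, a) toward the observed reward
        # plus the discounted value of the best action in the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy is simply "move right" in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

In the language-model framing described above, the states would be sequences of words generated so far and the actions would be candidate next words, rather than positions on a chain.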

The A* search algorithm in Q*

A* is a popular graph search algorithm known for its efficiency and effectiveness in finding the shortest path from a start node to a target node in a graph. 

The mention of Q* needing “vast computing resources” and being capable of solving mathematical problems suggests that A* could be integrated with Q-learning to handle complex, multi-step reasoning processes. 

The algorithm could optimize decision-making over multiple steps by storing intermediate results and efficiently searching through possible sequences of actions (or words/responses).
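
For reference, here’s what classic A* looks like: a minimal Python sketch on a toy grid using a Manhattan-distance heuristic. The grid, the uniform step cost, and the heuristic are assumptions made purely for illustration; how, or whether, anything like this is wired into Q* is unknown.

```python
import heapq

def a_star(grid, start, goal):
    """Classic A* on a grid: 0 = free cell, 1 = wall."""
    def h(node):
        # Admissible heuristic: Manhattan distance to the goal.
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    visited = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        x, y = node
        for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                # Priority f = g + h: cost so far plus estimated cost to goal.
                heapq.heappush(open_heap,
                               (g + 1 + h((nx, ny)), g + 1, (nx, ny), path + [(nx, ny)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```

In the speculated Q* setting, the “nodes” would be partial sequences of words or reasoning steps rather than grid cells, with a learned heuristic scoring how promising each partial solution looks.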

Role of RLHF

RLHF involves training AI models using human feedback to guide the learning process. This can include demonstrating the desired outcomes, correcting mistakes, and providing nuanced feedback to refine the model’s understanding and performance.

In Q*, RLHF might be used to refine the model’s ability to make decisions and solve problems, especially in complex, multi-turn scenarios where nuanced understanding and reasoning are critical.
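
One core piece of RLHF is a reward model trained on human preference pairs. The toy sketch below shows the standard Bradley–Terry preference loss on scalar scores; the numeric scores are hypothetical stand-ins for a real reward model’s outputs, and whether Q* uses this exact objective is pure speculation.

```python
import math

def preference_loss(score_chosen, score_rejected):
    """Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected).

    Small when the reward model scores the human-preferred response well
    above the rejected one; large when it ranks the pair the wrong way.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# Reward model already agrees with the human label: low loss (~0.05).
print(preference_loss(2.0, -1.0))
# Reward model ranks the pair the wrong way round: high loss (~3.05).
print(preference_loss(-1.0, 2.0))
```

Minimizing this loss over many labeled pairs teaches the reward model to score outputs the way humans would, and that reward signal is then used to fine-tune the underlying model.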

That’s how Q* might work, but it doesn’t really tell us why it would be so alarming, nor does it offer any clarity on the truth of the letter’s claims.

Only time will tell whether Q* is genuine and whether it poses any risk. 


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
