OpenAI strikes security deal with US government, eyes $100 billion valuation

August 30, 2024

  • OpenAI is granting US security officials inside access to its models
  • The collaboration involves the US AI Safety Institute and NIST
  • It permits pre-release testing of OpenAI's future models

OpenAI has signed a first-of-its-kind agreement with the US government to collaborate on AI safety research and evaluation. 

On Thursday, the U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST), announced that it had reached agreements with both OpenAI and its rival Anthropic. 

The deal comes as the company is reportedly in talks to raise funding at a staggering $100 billion valuation, showing how, despite a slower period of progress, OpenAI is still very much charging forward into the future. 

The partnerships will give the government inside access to major new AI models before and after their public release, though it’s difficult to see exactly what this amounts to.

It’s fair to say that even AI developers themselves don’t truly understand how their models work, so what the NIST stands to gain could be limited. Nevertheless, this marks an attempt to deepen government oversight of secretive frontier AI models. 

“Safe, trustworthy AI is crucial for the technology’s positive impact,” said Jason Kwon, chief strategy officer at OpenAI. “We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence.”

Elizabeth Kelly, director of the U.S. AI Safety Institute, called the deals “an important milestone as we work to help responsibly steward the future of AI.”

Under the arrangement, the institute will provide feedback to OpenAI on potential safety improvements to its models, working in close collaboration with the UK AI Safety Institute, which has also requested access to AI models' inner workings. 

The agreements have been a long time coming in the Biden administration's efforts to regulate AI. Since Biden's executive order, signed in October 2023, AI governance in the US has slowly gotten into gear, though some would argue progress has left a lot to be desired. 

The timing of the agreement is noteworthy for two reasons. First, it comes as OpenAI is reportedly in discussions to raise a new round of funding that would value the company at over $100 billion. 

This astronomical figure represents a more than threefold increase from last year’s reported $29 billion valuation.

According to sources familiar with the matter, venture capital firm Thrive Capital is set to lead the round with a $1 billion investment. Tech giant Microsoft, already OpenAI’s largest backer, is also said to be participating in the funding.

Second, OpenAI is reportedly on the cusp of releasing a new product: GPT-5, SearchGPT, or some other iteration involving the codenamed "Project Strawberry." 

Project Strawberry, initially called Q*, allegedly combines an AI model with an autonomous AI agent capable of surfing the internet. 

Crucially, OpenAI reportedly demonstrated Strawberry to US security officials, which might have formed part of this new deal with the AI Safety Institute and NIST.

OpenAI has been pretty quiet, all things considered. GPT-4o was touted as major progress, but its crown jewel – the voice chat feature – has yet to see mass rollout. OpenAI cited safety and regulatory barriers as the reason for the delay. 

Might OpenAI be striking this new partnership to help avoid such snags in the future?


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
