The UK government wants to see inside AI’s ‘black box’

September 29, 2023

AI Safety Summit

The UK is negotiating with major tech companies, including OpenAI and DeepMind, aiming to gain deeper insight into the safety of their AI technologies. 

As the UK gears up for the global AI Safety Summit, the government has expressed its desire to probe AI models and see inside the ‘black box.’ Sources privy to the negotiations reveal that representatives are actively seeking permission to delve into the inner workings of advanced AI models.

Major AI developers are notoriously cagey about sharing such information, wary of inadvertently disclosing confidential product details or evidence that copyrighted material was used in training.

The UK government argues that understanding how advanced AI models work will allow authorities to preemptively spot hazards, but it hasn't explained precisely what it wants to know. There are also many open-source AI models that work similarly to proprietary systems like ChatGPT.

In the US, several leading AI developers signed up for a voluntary framework that would involve their models being independently tested prior to release – something that already happens in China.

In June, a preliminary agreement saw DeepMind, OpenAI, and Anthropic consent to grant model access to the UK government for research and safety evaluations. However, the specifics of this access remain undefined.

In a statement, Anthropic mentioned exploring the possibility of delivering the model via an API (Application Programming Interface) to balance the concerns of both parties.

While an API provides only a limited glimpse into a model's operations, the UK government is lobbying for a 'more profound' understanding, according to insiders speaking to the Financial Times.

DeepMind insiders agreed that accessing models for safety research would be useful, but OpenAI has not commented.

A source close to the government explained, "These companies are not being obstructive, but it is generally a tricky issue, and they have reasonable concerns. There is no button these companies can just press to make it happen. These are open and unresolved research questions."

The UK’s AI Safety Summit in early November is a direct response to calls for stricter AI regulation. A variety of stakeholders are expected to participate, notably including Chinese officials. 

According to insiders, the UK government is working to finalize an agreement that will be unveiled at the summit.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
