UK report lists potential AI risks and doomsday scenarios

October 26, 2023

The UK government released its “Capabilities and risks from frontier AI” report, which will form the basis of discussion at the upcoming AI Safety Summit.

The report provides an overview of the current and future capabilities and risks associated with frontier models. Frontier models are highly capable general-purpose AI models that can perform a wide variety of tasks, like GPT-4 or Llama 2.

Before addressing current and future risks, the report acknowledges that AI has the potential to improve our lives in many ways. In its introduction, the report said that because of AI “we find ourselves in the midst of a technological revolution that will fundamentally alter the way we live, work, and relate to one another.”

It’s the risks and doomsday scenarios outlined in the report that make for particularly interesting reading.

Future risks of AI

The report noted that “The complexity of open-ended domains makes it difficult to design safe systems or exhaustively evaluate all downstream use cases.” In plain terms, AI is a tool much like a pen: you can make it write well, but you can’t stop someone from using it to write something harmful.

The risks that flow from this dual-use nature of AI fill tens of pages of the report and make you wonder what the summit can achieve in two days.

One notable risk raised in the report is of future AI actively pursuing autonomy. The report said, “Loss of control could be accelerated if AI systems take actions to increase their own influence and reduce human control.”

It noted that future AI could develop both the capability and the disposition to do just that.

While acknowledging that some experts feel the risks of AI have been overstated, the report doesn’t dismiss the more existential threats.

AI doomsday scenarios

The report said, “Given the significant uncertainty in predicting AI developments, there is insufficient evidence to rule out that highly capable future Frontier AI systems, if misaligned or inadequately controlled, could pose an existential threat.”

The report describes one way that could happen: “As AI capability increases, humans grant AI more control over critical systems and eventually become irreversibly dependent on systems they don’t fully understand.”

If the lights go out and AI is the only one that knows where the switch is, then we could be stuck in the dark for some time.

In one futuristic scenario, the report describes what it terms an “AI Wild West”. In this scenario set in 2030, “Authorities are struggling with the volume and diversity of misuse. A focus on tackling the immediate impacts of this crisis has made it hard to reach a global consensus on how to manage the issues long-term.”

That doesn’t sound like the future; it sounds like what’s happening right now. A two-day summit is unlikely to make much of a dent in the alarming risks detailed in this report.

The AI Safety Summit is predicated on the idea that “safe” AI is possible if we just try hard enough. But is it?

If an AGI is already in the wild, it could easily read the concerns outlined in this report and then hide its ability or intention to play out these scenarios. Until it’s too late.

Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
