DeepMind’s CEO draws a comparison between AI risks and the climate crisis

October 24, 2023


Demis Hassabis, CEO of Google DeepMind, argued that AI’s risks should be taken as seriously as the climate crisis and that the response must be immediate.

Speaking ahead of the UK’s AI Safety Summit, set to take place on November 1st and 2nd, Hassabis asserted that the world shouldn’t delay its response to the challenges posed by AI. 

“We must take the risks of AI as seriously as other major global challenges, like climate change,” Hassabis stated. 

“It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.”

The “effective global response” Hassabis has in mind is the Intergovernmental Panel on Climate Change (IPCC), which wasn’t established until 1988.

He and others have advocated for an independent international regulatory body for AI, similar to the IPCC or the International Atomic Energy Agency (IAEA), established in 1957.

In 1955, Atomic Energy Commissioner Willard F. Libby said of nuclear power: “Our great hazard is that this great benefit to mankind will be killed aborning by unnecessary regulation.”

After the IAEA was established, there were relatively few nuclear disasters before Chernobyl in 1986. We don’t yet know what an ‘AI disaster’ might look like.

Balancing the benefits and risks of AI

While acknowledging the transformative potential of AI in fields such as medicine and science, Hassabis also highlighted the existential threats posed by the technology, especially the development of super-intelligent systems – or artificial general intelligence (AGI). 

“I think we have to start with something like the IPCC, where it’s a scientific and research agreement with reports, and then build up from there,” Hassabis advised. 

“Then what I’d like to see eventually is an equivalent of a Cern for AI safety that does research into that – but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things.”

DeepMind has published numerous works on AI safety, including a technical blog post, written in collaboration with several universities, arguing that models should be evaluated for “extreme risks” before any training takes place.

DeepMind has also produced experimental evidence of AI systems pursuing emergent goals that unpredictably diverge from their developers’ intentions – going ‘rogue,’ in other words.

Hassabis and representatives from other major AI companies like OpenAI will be attending the AI Safety Summit in November. The UK government has released the schedule for the first day of the event.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
