The ‘Godfather of AI’ fears AI could take over humanity

October 10, 2023

Geoffrey Hinton, often referred to as the ‘Godfather of AI’ because of his pioneering work in artificial neural networks, fears that AI could soon take over humanity.

In an interview on 60 Minutes, Hinton explained that while he couldn’t be certain of the future of AI, things could go very wrong, very soon if we don’t take proper precautions.

Hinton doesn’t believe that AI is sentient in a self-aware sense yet, but he says that “we’re moving into a period when for the first time ever, we have things more intelligent than us.”

Hinton isn’t your typical AI doomsayer. He’s proud of his pioneering work and his accomplishments in AI while working at Google. He sees the obvious benefits of using AI in healthcare and the development of new drugs.

The danger he highlights is that we don’t really know how complicated AI systems ‘think’.

Referring to AI’s ability to self-learn, Hinton said, “We have a very good idea sort of roughly what it’s doing. But as soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain.”

As AI improves, it’s not likely to become any easier to get to grips with the inner workings of its black box.

“I think in five years time it may well be able to reason better than us,” Hinton said. Essentially humans could soon become the second-most intelligent beings on the planet and lose control over AI.

Hinton said, “One of the ways these systems might escape control is by writing their own computer code to modify themselves. And that’s something we need to seriously worry about.”


If chimps decided to rise up one day and attack humans, humans would win because we’re smarter. If AI became smarter than humans and decided it didn’t like us, we would suffer the same fate.

Couldn’t we just turn the computers off if they begin to misbehave? Hinton says that after reading all of human literature and studying political connivance, AI would become a master manipulator of people, able to prevent that from happening.

When asked if AI would take over humanity Hinton said, “Yes, that’s a possibility. I’m not saying it will happen. If we could stop them ever wanting to, that would be great. But it’s not clear we can stop them ever wanting to.”

Hinton resigned from Google in May this year, citing AI risks that he didn’t feel were being addressed. He joined 350 AI leaders in signing a statement released by the Center for AI Safety (CAIS).

The CAIS statement said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Hinton clearly still feels that this isn’t overstating the risks.

“These things do understand, and because they understand, we need to think hard about what’s next, and we just don’t know.”


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
