Microsoft: AI not “existential threat” but needs oversight

June 30, 2023


Brad Smith, Vice Chair and President at Microsoft, said in a recent interview in Brussels that AI doesn’t pose an existential threat to humanity. He acknowledged, however, that oversight is needed.

Some of his comments were clearly intended to be reassuring but may be cold comfort for AI alarmists. “We need safety brakes that will ensure that AI remains under human control,” Smith said. The implication is that the potential for humans to lose control remains.

Smith said that the international community needs to coordinate efforts to figure out what these guardrails should be and how to implement them. “I think if we do that well, we’ll recognize that this is not an existential risk,” he said.

So keeping the world safe from AI simply relies on all countries working together in agreement. No problem, right?

EU’s Artificial Intelligence Act offers hope

Microsoft is closely following developments in Brussels as the EU’s Artificial Intelligence Act makes progress. MEPs recently moved the act along when they agreed to a blanket ban on AI-powered real-time facial recognition systems in public spaces.

On the face of it, regulatory efforts like the EU AI Act and US AI regulation seem like a great idea. But they are also an example of the possible naivety of Brad Smith’s hope that all countries using AI could agree on the rules of play. China makes no effort to hide its use of facial recognition to enforce its social credit system.

Sanctions and blueprints may not be enough

The US signed off on export controls in 2022 to regulate which customers US-based AI chip makers could supply. It turns out that hasn’t worked so well. And the fact that countries have to legislate things like this shows that state and corporate interests don’t always align.

Microsoft obviously sees the potential AI holds for its bottom line, and it’s already using AI to do some impressive things in its 365 Copilot product. Its recently released 5-point blueprint for AI safety is its way of saying, “Hey, we can make this work safely and profitably at the same time.” Microsoft wants to be able to keep selling its products in markets like China.

Brad Smith offered hope by citing the example of how the international community has regulated international air travel. Internationally agreed rules make it possible for thousands of commercial jets to fly every day without crashing into each other. Could we do the same with AI?

That sounds like a good example, until you remember how hijacked airplanes changed the world in dramatic fashion in September 2001. That didn’t stop us from flying, but it did make us a lot more cautious after the fact. Let’s hope the international community doesn’t need something quite that dramatic to happen before it works this out.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
