MIT professor highlights the risks of intense AI development

September 25, 2023

Max Tegmark, MIT professor, physicist, and co-founder of the Future of Life Institute, has raised concerns about the relentless pace of AI development. 

He suggests that the intense competition among tech firms makes robust self-regulation less likely. 

Within the space of just one year, major players such as OpenAI and Google have released at least two generations of AI models. 

GPT-4 represented a massive improvement over GPT-3.5, reportedly exceeding its parameter count by some five to ten times.

Earlier this year, Tegmark spearheaded an open letter that called for a six-month pause in developing advanced AI systems.

The letter received massive support from industry leaders, politicians, and various public figures, with over 30,000 signatories, including Elon Musk and Apple’s co-founder, Steve Wozniak.

The letter vividly described the potential dangers of unbridled AI development. It urged governments to intervene if major AI players, such as Google, OpenAI, Meta, and Microsoft, couldn’t reach a consensus on how to harness AI’s benefits while mitigating its risks. 

The Center for AI Safety (CAIS) compared AI risks to nuclear war

Following that first letter, the CAIS issued a statement backed by numerous tech leaders and academics, which compared the societal risks posed by AI with those of pandemics and nuclear war. 

The CAIS statement said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

However, despite the broad sentiment that self-regulation is needed until formal regulation comes into force, the tech industry has largely continued AI development at the same pace. 

Tegmark reflected on the letter’s impact and the reasons behind its limited success in halting AI development, stating to the Guardian, “I felt that privately a lot of corporate leaders I talked to wanted [a pause], but they were trapped in this race to the bottom against each other. So no company can pause alone.”

However, Tegmark noted that debates surrounding increased political awareness about AI safety are ramping up, pointing to US Senate hearings, Chuck Schumer’s first AI Insight Forum, and November’s global AI Safety Summit in the UK.

Looking forward, Tegmark cautioned against viewing the development of a “god-like general intelligence” as a distant threat, citing some in the AI community, notably at OpenAI, who believe it could arrive sooner than many anticipate.

Tegmark concluded by stressing the importance of safety standards in AI development and voiced concerns over open-source AI models, like Meta’s Llama 2. 

Drawing a parallel to other potent technologies, he cautioned, “Dangerous technology should not be open source, whether it’s bio-weapons or software.”

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
