The battle of the ‘AI godfathers’ – is AI risky or not?!

November 2, 2023


AI is risky, right? That’s what we’ve all been told this last year. But not everyone agrees, including some top researchers. 

X has been ablaze with rows over whether AI poses genuine existential risks. We’re not talking misinformation or deep fakes – though these are already bad – but risks on par with nuclear disasters and pandemics.

This debate has been led by AI risk skeptic and Meta chief AI scientist Yann LeCun, considered one of the so-called ‘AI godfathers’ alongside Yoshua Bengio and Geoffrey Hinton.

In LeCun’s firing line are OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, whom LeCun accused of “massive corporate lobbying.”

Specifically, LeCun is concerned that big tech bosses are ramping up talk of AI risk and misuse to push regulators into locking down the industry in their favor.

Big tech has the teeth to deal with regulation, whereas smaller firms and open-source developers don’t. LeCun also doubts the often-touted vision of an AI ‘singularity’ where the technology suddenly becomes more intelligent than humans, thus signaling a new era of superintelligent AI. 

Others, including Altman, have since said they expect AI development to be gradual and progressive rather than ‘taking off’ overnight.

According to LeCun, big tech can wield AI risks to its benefit, reinforcing market structures and shutting out competition. The real risk, he argues, is a select few AI developers owning the industry and dictating its trajectory.

He stressed the gravity of the situation, stating, “If these efforts succeed, the outcome would be a catastrophe because a small number of companies will control AI.” The warning reflects the broad consensus in the tech world that AI is transformative, with a significance on par with the microchip or the internet.

LeCun’s remarks came in response to renowned physicist Max Tegmark, who had implied that LeCun was not taking the existential risks of AI seriously enough.

In a public post, Tegmark acknowledged the efforts of global leaders in recognizing the potential dangers of AI, stating that those dangers “can’t be refuted with snark and corporate lobbying alone.”

Amidst the rise of AI’s influence, figures like Altman and Hassabis have become central to the public discourse on technology. However, LeCun criticizes them for fueling fear about AI while simultaneously profiting from it.

In March, over a thousand tech figures, including Elon Musk, signed an open letter calling for a pause in AI development, citing significant societal and humanitarian risks; Altman and Hassabis later signed a separate statement likening AI risk to pandemics and nuclear war.

LeCun, however, argues that these dramatic warnings are a smokescreen, diverting attention from immediate issues such as worker exploitation and data theft.

LeCun calls for a refocusing of the discussion on the present and immediate future of AI development. He expressed his concerns about the potential obliteration of the open-source AI community if AI development becomes confined to private, for-profit entities.

For LeCun, the stakes are high: “The alternative, which will inevitably happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people’s entire digital diet,” he cautioned, urging consideration of the implications for democracy and cultural diversity.

LeCun’s backers

LeCun’s arguments have found strong backing on social media, drawing agreement from commenters and industry experts alike.

For instance, Andrew Ng, a pivotal figure in AI development and cofounder of Google Brain, also raised concerns that major technology companies may be manipulating the discourse on AI to gain market dominance.


In an interview with The Australian Financial Review, Ng highlighted a trend among tech giants to amplify AI fears, specifically the idea that it could lead to human extinction. Like LeCun and others, he suggests this narrative is being used strategically to prompt stricter AI regulation, thereby hindering open-source initiatives and smaller competitors.

“There are definitely large tech companies that would rather not have to try to compete with open source, so they’re creating fear of AI leading to human extinction,” Ng explained. “It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community.”

Earlier in the year, a leaked Google memo seemingly admitted that big tech was losing ground to open-source. 

The memo said, “Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”

Open-source AI is fast, non-monolithic, private, and, above all else, cheap. Big tech is already struggling to monetize AI, so it all makes sense on paper: regulate the AI industry to help the titans prevail. 

Distractions from immediate threats

There is another dimension to this debate that goes beyond money.

Many have reiterated that AI’s present risks, such as extremely sophisticated deep fakes and misinformation, are already sufficient to warrant lengthy debate and rapid action. 

Aidan Gomez, a prominent researcher in AI and CEO of Cohere, also highlighted the risk of focusing too heavily on doomsday scenarios. 

Speaking ahead of this week’s AI Safety Summit, Gomez pointed out that immediate threats such as misinformation and the erosion of social trust are being overshadowed by discussions on long-term existential risks.

“I think in terms of existential risk and public policy, it isn’t a productive conversation to be had,” Gomez stated, emphasizing the need to prioritize immediate risks.

“As far as public policy and where we should have the public-sector focus – or trying to mitigate the risk to the civilian population – I think it forms a distraction, away from risks that are much more tangible and immediate.”

Gomez highlights the pervasive influence of AI in products used by billions, emphasizing the urgent need to address risks actively impacting the public. “This technology is already in a billion user products, like at Google and others. That presents a host of new risks to discuss, none of which are existential, none of which are doomsday scenarios,” he explained. 

He identifies misinformation as a primary concern, given AI models’ ability to create media “virtually indistinguishable from human-created text or images or media.”

Misinformation is indeed public enemy number one in terms of AI risks, given that we’ve already seen deep fakes deployed in successful scams.

Deep fakes released in the final days before Slovakia’s election showed how they can tangibly affect democracy.

Yoshua Bengio, Geoffrey Hinton, and Elon Musk join the debate

‘AI godfather’ Yoshua Bengio, along with an assembly of over 200 tech leaders and researchers, endorsed an open letter highlighting the critical need for immediate and substantial action.

Bengio is now spearheading an international report unveiled at the UK’s AI Safety Summit. He is firmly convinced of AI’s risks, as is Geoffrey Hinton, meaning two of the three ‘AI godfathers’ are worried on some level.


On his website, Bengio wrote, “I recently signed an open letter asking to slow down the development of giant AI systems more powerful than GPT-4 – those that currently pass the Turing test and can thus trick a human being into believing it is conversing with a peer rather than a machine.”

Geoffrey Hinton, the third of the AI godfather triumvirate, left Google earlier this year to ‘speak out’ about his fears over AI. He went as far as saying he regretted his work because of how AI could be misused, stating in an interview, “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Hinton, like Bengio, supports the view that AI poses serious extinction-level risks. He also doubts that open-sourcing models could support safety efforts. 

He recently highlighted that his choice to leave Google was at odds with LeCun et al.’s critical stance, stating on X, “Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A data point that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat.”

Hinton, Bengio, and other top AI researchers, including influential Chinese computer scientist Andrew Yao, recently backed a paper describing AI’s risks.

In Hinton’s words, “Companies are planning to train models with 100x more computation than today’s state of the art, within 18 months. No one knows how powerful they will be. And there’s essentially no regulation on what they’ll be able to do with these models.”

This debate rumbles on and will continue to do so.

LeCun has another antagonist in Elon Musk, who is seldom absent from a high-level X debate, especially one that sits firmly in his wheelhouse (not to mention on his platform).

Musk has a well-documented history of expressing his concerns about AI, but his stance has become highly unpredictable.

Musk has long labeled AI a potential existential threat to humanity, once describing the development of advanced AI as like “summoning the demon.” At the AI Safety Summit, however, he said he thought AI would yield net positive benefits.

In recent months, Musk has criticized OpenAI, highlighting a deviation from the organization’s original mission and expressing concerns about its ‘closed-source’ direction, influenced heavily by Microsoft. 

Both Musk and LeCun broadly agree in their criticisms of closed-source AI, showing that acknowledging AI’s risks doesn’t always mean denouncing open source.

On the topic of open-source, let’s see what Musk chooses to do with any products his AI startup xAI releases.

Google DeepMind boss hits back at LeCun’s ‘fearmongering’ claim

The head of Google DeepMind, Demis Hassabis, has responded to LeCun’s allegation that the company is engaging in extensive corporate lobbying to dominate the AI industry.

In an interview with CNBC, Hassabis emphasized that DeepMind’s engagement in AI discussions is not an attempt to achieve “regulatory capture.”

He stated, “I pretty much disagree with most of those comments from Yann.”

He outlined three main categories of risks associated with AI, emphasizing the importance of addressing immediate issues like misinformation, deep fakes, and bias while also considering the potential misuse of AI by malicious actors and long-term risks associated with artificial general intelligence (AGI). 

He advocated for initiating conversations about regulating superintelligent AI sooner rather than later to avoid dire consequences.

Highlighting the global nature of AI technology, Hassabis underscored the need for international cooperation, including engaging with China, to establish responsible development and regulation standards. 

Both he and James Manyika, Google’s senior vice president of research, technology, and society, expressed a desire to see a global consensus on AI. Hassabis attended the UK’s AI Safety Summit, where he aligned with sentiment from world leaders to emphasize AI safety and oversight. 

Despite the ongoing tech tensions between the US and China and the hesitancy of US tech giants to engage in commercial work in China, Hassabis emphasized the necessity of communication, stating, “I think we have to talk to everyone at this stage.”

Is there a middle ground?

The debates surrounding AI’s potential risks and the necessary regulatory responses are reaching a critical juncture.

Recent weeks have seen a snowballing of ‘official talks’ about AI, but regulation is still in the pipeline.

US regulation and the EU’s AI Act will lay down the first hard foundations for AI governance. 

LeCun’s corner makes a compelling argument about AI monetization in the face of open-source competition. From Google’s leaked memo to the cohesive messaging on existential risks pushed by big tech bosses in the mainstream media, he and his backers have plenty of ammunition.

However, the lack of consensus runs deeper than the heads of profit-making AI developers at Microsoft, DeepMind, Anthropic, and so on.

Bengio and Hinton are mainly unaffiliated, and the late Professor Stephen Hawking, for example, famously stated that AI “could spell the end of the human race” and may turn out to be the “worst event in the history of our civilization.”

Some researchers have argued that AI systems could autonomously develop emergent goals, with potentially catastrophic outcomes if not properly managed. Other experiments suggest that AI systems can seek resources, accumulate power, and take steps to avoid being shut down.

There is robust evidence of AI-related harm, but is it proportionate to the alarm bells being rung by big tech bosses?

And perhaps more to the point, can they really be trusted to handle the industry ethically for all, regardless of the presence or absence of risk?

Perhaps we should view the actions of big tech as the ultimate gamble.

AI is risky, most likely, but a select few firms want to own that risk, push it to its limits, and shape it how they see fit, for right or for wrong.

Big tech might load the game so the house always wins for as long as it stays standing. 


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
