The famous saying that “the more we know, the more we don’t know” certainly rings true for AI.
The more we learn about AI, the less we seem to know for certain.
Experts and industry leaders often find themselves at loggerheads about where AI is now and where it’s heading. They can’t agree on seemingly fundamental concepts like machine intelligence, consciousness, and safety.
Will machines one day surpass the intellect of their human creators? Is AI advancement accelerating towards a technological singularity, or are we on the cusp of an AI winter?
And perhaps most crucially, how can we ensure that AI development remains safe and beneficial when experts can’t agree on what the future holds?
AI is immersed in a fog of uncertainty. The best we can do is explore different perspectives and form informed yet fluid views of an industry constantly in flux.
Debate one: AI intelligence
With each new generation of generative AI models comes a renewed debate on machine intelligence.
Elon Musk recently fuelled debate on AI intelligence when he said, “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”
Musk was immediately disputed by Meta’s chief AI scientist and eminent AI researcher, Yann LeCun, who said, “No. If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17 year-old. But we still don’t have fully autonomous, reliable self-driving, even though we (you) have millions of hours of *labeled* training data.”
This exchange is but a microcosm of the gulf in opinion among AI experts and leaders.
It’s a debate that spirals into endless interpretation with little consensus, as demonstrated by the wildly contrasting views of influential technologists over the last year or so (quotes compiled by Improve the News):
- Geoffrey Hinton: “Digital intelligence” could overtake us within “5 to 20 years.”
- Yann LeCun: Society is more likely to get “cat-level” or “dog-level” AI years before human-level AI.
- Demis Hassabis: We may achieve “something like AGI or AGI-like in the next decade.”
- Gary Marcus: “[W]e will eventually reach AGI… and quite possibly before the end of this century.”
- Geoffrey Hinton: Current AI like GPT-4 “eclipses a person” in general knowledge and could soon do so in reasoning as well.
- Geoffrey Hinton: AI is “very close to it now” and will be “much more intelligent than us in the future.”
- Elon Musk: “We will have, for the first time, something that is smarter than the smartest human.”
- Elon Musk: “I’d be surprised if we don’t have AGI by [2029].”
- Sam Altman: “[W]e could get to real AGI in the next decade.”
- Yoshua Bengio: “Superhuman AIs” will be achieved “between a few years and a couple of decades.”
- Dario Amodei: “Human-level” AI could occur in “two or three years.”
- Sam Altman: AI could surpass the “expert skill level” in most fields within a decade.
- Gary Marcus: “I don’t [think] we are all that close to machines that are more intelligent than us.”
Top AI leaders strongly disagree on when AI will overtake human Intelligence. 2 or 100 years – what do *you* think? @ylecun @GaryMarcus @geoffreyhinton @sama https://t.co/59t8cKw5p5
— Max Tegmark (@tegmark) March 18, 2024
No party is unequivocally right or wrong in the debate over machine intelligence. It hinges on one’s subjective interpretation of intelligence and how AI systems measure up against that definition.
Pessimists may point to AI’s potential risks and unintended consequences, emphasizing the need for caution. They argue that as AI systems become more autonomous and powerful, they might develop goals and behaviors misaligned with human values, leading to catastrophic outcomes.
Conversely, optimists may focus on AI’s transformative potential, envisioning a future where machines work alongside humans to solve complex problems and drive innovation. They may downplay the risks, arguing that concerns about superintelligent AI are largely hypothetical and that benefits outweigh the risks.
The crux of the issue lies in the difficulty of defining and quantifying intelligence, especially when comparing entities as disparate as humans and machines.
For example, even a fly has advanced neural circuits and can successfully evade our attempts to swat or catch it, outsmarting us in this narrow domain. These kinds of comparisons are potentially limitless.
Pick your examples of intelligence, and everyone can be right or wrong.
Debate two: is AI accelerating or slowing?
Is AI advancement set to accelerate or plateau and slow down?
Some argue that we’re in the midst of an AI revolution, with breakthroughs arriving thick and fast. Others contend that progress has hit a plateau and that the field faces momentous challenges that could slow innovation in the coming years.
Generative AI is the culmination of decades of research and billions in funding. When ChatGPT landed in 2022, the technology had already reached an advanced state in research environments, setting the bar high and throwing society in at the deep end.
The resulting hype also drummed up immense funding for AI startups, from Anthropic and Inflection to Stability AI and Midjourney.
This, combined with huge internal efforts from Silicon Valley veterans Meta, Google, Amazon, Nvidia, and Microsoft, resulted in a rapid proliferation of AI tools. GPT-3 quickly morphed into the heavyweight GPT-4, while competing LLMs such as Anthropic’s Claude 3 Opus, xAI’s Grok, Mistral’s models, and Meta’s open-source releases have also made their mark.
Some experts and technologists, such as Sam Altman, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Elon Musk, feel that AI acceleration has only just begun.
Musk said generative AI was like “waking the demon,” while Altman has warned that AI “mind control” is imminent, a prospect Musk has underscored with recent advances at Neuralink, where one man recently played a game of chess through thought alone.
On the other hand, experts such as Gary Marcus and Yann LeCun feel we’re hitting brick walls, with generative AI facing an introspective period or ‘winter.’
This would result from practical obstacles, such as rising energy costs, the limitations of brute-force computing, regulation, and material shortages.
Generative AI is expensive to develop and maintain, and monetization isn’t straightforward. Tech companies must find ways to maintain momentum so money keeps flowing into the industry.
Debate three: AI safety
Conversations on AI intelligence and progress also have implications for AI safety. If we cannot agree on what constitutes intelligence or how to measure it, how can we ensure that AI systems are designed and deployed safely?
The absence of a shared understanding of intelligence makes it challenging to establish appropriate safety measures and ethical guidelines for AI development.
To underestimate AI intelligence is to underestimate the need for AI safety controls and regulation.
Conversely, overestimating or exaggerating AI’s abilities warps perceptions and risks over-regulation. That could concentrate power in Big Tech, which has proven clout in lobbying and out-maneuvering legislation. And when it does slip up, it can afford to pay the fines.
Last year, protracted X debates among Yann LeCun, Geoffrey Hinton, Max Tegmark, Gary Marcus, Elon Musk, and numerous other prominent figures in the AI community highlighted deep divisions over AI safety. Big Tech has been hard at work self-regulating, creating ‘voluntary guidelines’ of dubious efficacy.
Critics further argue that regulation enables Big Tech to reinforce market structures, rid themselves of disruptors, and set the industry’s terms of play to their liking.
On that side of the debate, LeCun argues that the existential risks of AI have been overstated and are being used as a smokescreen by Big Tech companies to push for regulations that would stifle competition and consolidate control.
LeCun and his supporters also point out that AI’s immediate risks, such as misinformation, deep fakes, and bias, are already harming people and require urgent attention.
Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
They are the ones who are attempting to perform a regulatory capture of the AI industry.
You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D. If…
— Yann LeCun (@ylecun) October 29, 2023
On the other hand, Hinton, Bengio, Hassabis, and Musk have sounded the alarm about the potential existential risks of AI.
Bengio, LeCun, and Hinton, often known as the ‘godfathers of AI’ for their pioneering work on neural networks, deep learning, and other AI techniques through the 90s and early 2000s, remain influential today. Hinton and Bengio, whose views generally align, took part in a rare recent meeting between US and Chinese researchers at the International Dialogue on AI Safety in Beijing.
The meeting culminated in a statement: “In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology.”
It has to be said that Bengio and Hinton aren’t obviously financially aligned with Big Tech and have no reason to over-egg AI risks.
Hinton raised this point himself in an X spat with LeCun and ex-Google Brain co-founder Andrew Ng, highlighting that he left Google to speak freely about AI risks.
Indeed, many great scientists have questioned AI safety over the years, including the late Professor Stephen Hawking, who viewed the technology as an existential risk.
Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat.
— Geoffrey Hinton (@geoffreyhinton) October 31, 2023
This swirling mix of polemic exchanges leaves little space for people to occupy the middle ground, fueling generative AI’s image as a polarizing technology.
AI regulation, meanwhile, has become a geopolitical issue, with the US and China tentatively collaborating on AI safety despite escalating tensions elsewhere.
So, just as experts disagree about when and how AI will surpass human capabilities, they also differ in their assessments of the risks and challenges of developing safe and beneficial AI systems.
Debates surrounding AI intelligence aren’t just academic or philosophical in nature – they are also a question of governance.
When experts vehemently disagree over even the basic elements of AI intelligence and safety, regulation can’t hope to serve people’s interests.
Creating consensus will require tough realizations from experts, AI developers, governments, and society at large.
However, in addition to many other challenges, steering AI into the future will require some tech leaders and experts to admit they were wrong. And that’s not going to be easy.