2,778 researchers weigh in on AI risks – what do we learn from their responses?

January 4, 2024

A large-scale survey of 2,778 AI researchers uncovered divided opinions on the risks of AI.

The survey, conducted by AI Impacts, is the largest of its kind and polled researchers who have published at six leading AI conferences.

Participants were asked to weigh in on future AI milestones and their societal implications. 

Notably, almost 58% of these researchers believe there’s at least a 5% chance of human extinction or similarly dire outcomes due to AI advancements.

Other findings suggest that AI has a 50% or higher likelihood of mastering tasks ranging from composing chart-worthy music to building a complete payment processing website within the next decade. 

More complex tasks, like installing electrical wiring or solving mathematical problems, are expected to take longer, though DeepMind recently demonstrated superior AI performance on the cap set and bin packing problems. 
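
For readers unfamiliar with the first of those problems: a cap set is a collection of points in n-dimensional space over the three-element field that contains no three points on a line. A standard formalization (the notation here is included for context only and is not drawn from the survey or DeepMind’s paper) is:

\[
S \subseteq \mathbb{F}_3^{\,n} \ \text{is a cap set} \iff \text{no three distinct } a, b, c \in S \ \text{satisfy} \ a + b + c = 0,
\]

which is equivalent to saying S contains no three-term arithmetic progression. DeepMind’s FunSearch system reportedly discovered cap sets in eight dimensions larger than any previously known, along with improved heuristics for online bin packing.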

Researchers also estimate a 50% chance of AI surpassing human performance in all tasks by 2047 and of all human jobs becoming automated by 2116. Both estimates assume that science remains ‘undisrupted.’ 

The results are certainly mixed, but it’s fair to say that, on the whole, researchers don’t yet see AI risks as having reached a critical point. 

Katja Grace from the Machine Intelligence Research Institute in California stated of the study, “It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity.”

Here’s a list of the key stats and findings:

  • Probability of AI achieving milestones by 2028: There’s a 50% chance that AI systems will be able to construct a payment processing site, create a song indistinguishable from one by a popular musician, and download and fine-tune a large language model (LLM).
  • Human-level AI performance predictions: The study estimates a 10% chance by 2027 and a 50% chance by 2047 of unaided machines outperforming humans in all tasks. The 2047 forecast is 13 years earlier than the one made in 2022.
  • Automation of human occupations: There is a 10% chance of all human occupations becoming fully automatable by 2037 and a 50% chance by 2116, 48 years earlier than the 2164 prediction made in 2022.
  • Outlook on AI’s long-term value: 68.3% of respondents think good outcomes from superhuman AI are more likely than bad, but 48% of these optimists still see at least a 5% chance of extremely bad outcomes like human extinction. Meanwhile, 59% of pessimists see a 5% or greater likelihood of extremely good outcomes.
  • Concerns over AI-driven scenarios: Over 70% of researchers express concern regarding issues like deepfakes, manipulation of public opinion, and engineered weaponry. Deepfakes caused the most alarm, with 86% of researchers expressing concern.
  • Probability estimates for AI risks: Between 37.8% and 51.4% of respondents believe there’s at least a 10% chance that advanced AI could lead to outcomes as severe as human extinction.
  • Probability estimates for HLMI (high-level machine intelligence): The 2023 aggregate forecast predicts a 50% chance of achieving HLMI by 2047, a significant shift from the 2060 prediction made in 2022.
  • Probability of full automation of labor (FAOL): The 2023 forecast estimates a 50% chance of FAOL by 2116, 48 years earlier than a similar prediction made in 2022.
  • AI safety research: About 70% of survey participants believe AI safety research should be prioritized more than it currently is.

Immediate risks eclipse the long-term

Beyond the existential risks, the survey highlights that a substantial majority of researchers hold immediate concerns about AI.

Over 70% express significant worry over AI-driven issues like deepfakes, manipulation of public opinion, engineered weaponry, authoritarian control, and escalating economic inequality. 

Émile Torres at Case Western Reserve University in Ohio specifically warns about AI’s role in spreading disinformation on critical topics and its impact on democratic governance, stating, “We already have the technology, here and now, that could seriously undermine [the US] democracy.” 

Political campaigns in the US and Canada have already used AI to generate images, and AI-powered robocalls are being deployed to drum up support among voters. 

Another significant incident occurred during Slovakia’s recent election when, just 48 hours before the polls opened, a falsified audio clip purportedly featuring key political figures discussing vote-buying tactics emerged on social media.

It triggered widespread confusion during a critical pre-election silence period. The timing of the release made promptly counteracting its effects nearly impossible. 

More recently, AI-generated smear campaigns targeted the hotly contested Bangladeshi election, with content aiming to discredit the opposition by claiming it supported the bombing of Gaza, a sensitive topic in a Muslim-majority country. 

AI risk timelines are debated

Comparisons between the short-term and long-term impacts of AI were the source of fierce debate when the Center for AI Safety (CAIS) released a dramatic statement in 2023 comparing AI risks to pandemics and nuclear war. 

Cue a deluge of media coverage discussing the worst predictions for AI development, from murderous robots to out-of-control drones on the battlefield. 

Thus far, it’s how humans use AI that poses risks – a case of “AI doesn’t kill people, humans do” – and worries about existential risks are sometimes seen as eclipsing what’s happening right now.

Deepfakes, in particular, have already had tangible impacts on human society.

This fake AI-generated image of an explosion at the Pentagon temporarily impacted financial markets in early 2023. Source: X.

There’s another layer to this, spearheaded by Yann LeCun, a prominent figure in AI, who suggests that large tech companies might exaggerate these risks to sway regulatory action in their favor. 

Fears, often promoted by big tech, could be a strategic ploy to encourage stringent regulations, which may disadvantage smaller entities and open-source initiatives. 

This points to a subtler danger: the potential monopolization of AI by a handful of large companies, which could stifle innovation and diversity in the field.

That leaves major players in the space, like Google and OpenAI, free to push the industry in their own direction so long as they can meet compliance demands.

Existential risks – what does the evidence say?

The dialogue on risks often swings between two extremes: skepticism and alarmism. So, what actual evidence is there to suggest AI might kill us or otherwise destroy the planet?

The idea of AI weaponization is perhaps the most readily imaginable example. Here, AI systems could kill humans with limited oversight, or threat actors could use the technology to develop cataclysmically effective weapons. 

Drones are already capable of destroying targets with minimal human input, as has arguably occurred in the Israel-Palestine and Russia-Ukraine conflicts.

The XQ-58A Valkyrie unmanned aircraft became fully AI-automated in 2023. Source: Wikimedia Commons.

A more subtle driver of AI existential risk is enfeeblement. The concern here is not just about losing jobs to robots but about losing essential human capabilities and creativity. A common analogy is the film WALL-E, where blob-like humans have robots to do their bidding at the cost of Earth itself.

There is some evidence that LLMs like ChatGPT are eroding students’ critical thinking, though it’s far too early to tell for sure.

Educational institutions are keen to harness the benefits of AI without killing off human-led academia, with several high-profile establishments conditionally endorsing student use of the technology.

WALL-E depicts a popular example of AI or tech-related ‘enfeeblement.’

Another reasonably well-documented risk is AI developing emergent goals, as highlighted by research from the University of Cambridge and DeepMind, which could have unintended and potentially harmful consequences for human society.

Emergent goals could see highly intelligent AIs establish their own unpredictable objectives, possibly even escaping their architecture to ‘infect’ other systems. 

Ultimately, AI’s potential to cause dramatic and violent extinction remains highly speculative, and there is little evidence that it could happen soon. Experimental evidence surrounding emergent goals and other ways autonomous AIs might inflict harm is growing, however.

For now, we’ve got plenty of immediate risks to handle, and let’s not forget that these challenges will unfold in parallel with climate change, habitat loss, population crises, and other macro-level risks.

While there is excitement about AI’s potential achievements, there’s also a palpable concern about its immediate risks and long-term implications. 

This latest survey serves as a reminder that the discourse on AI should not be skewed towards distant existential risks at the expense of addressing the immediate challenges and ethical dilemmas posed by this transformative technology. 

As AI advances, striking a balance in this debate becomes crucial for guiding its development toward beneficial and safe outcomes for humanity.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
