The newly formed Center for AI Safety (CAIS) released a statement on Tuesday, 30th May, signed by 350 AI leaders, academics, engineers, and other notable figures.
Key signatories include AI pioneers Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman and several colleagues, the CEOs of DeepMind, Anthropic, and other leading AI companies, and senior academics from MIT, Berkeley, the University of Cambridge, and other top academic institutions worldwide.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” – Center for AI Safety (CAIS) statement on AI – 30th May, 2023.
May has been marked by an intensifying debate surrounding AI’s societal impacts.
OpenAI CEO Sam Altman testified before Congress about AI safety, and one of AI’s early pioneers, Geoffrey Hinton, left Google to speak more freely about the technology’s risks.
CAIS builds upon tech leaders’ intentions to band together and form a cohesive plan to mitigate the risks of AI while emphasizing its uses and benefits. Those risks range from the immediate, such as mass job losses, to the medium and long term, such as autonomous AI exceeding human intelligence and escaping our control.
Dan Hendrycks, the director of the Center for AI Safety, told Sky News, “Humans have been the dominant species on Earth because of our intelligence. But now, as AI is becoming more powerful and more intelligent, we won’t occupy that same position in the future.”
AI: tangible and intangible risks
How does your favorite AI-themed movie end? There is something almost innate about our fear of AI, an instinctive dread of what happens when ‘the robots take over.’
However, right now, the long-term risks of AI are primarily intangible and largely based on conjecture, abstraction, and projections.
The first tangible impact of AI is likely to be felt in the job market, where, by some estimates, over 375 million jobs will be at risk by 2030. AI is evolving rapidly, so this may prove a conservative estimate. AI also poses risks of discrimination, impersonation, cybersecurity breaches, and weaponization.
Tech leaders and academics acknowledge we’re traveling through a period of rapid innovation, where technological developments outpace research.
This has happened several times throughout human history, for example, during the advent of atomic and nuclear energy and the creation of the internet. Earlier in the month, OpenAI’s Sam Altman encouraged the formation of an international agency for AI, similar to the International Atomic Energy Agency formed in the 1950s.
Whatever the risks of AI are, we’ll likely collide with them head-on, as it’s tough to envisage big tech reining in its progress, and many analysts believe self-regulation is a pipe dream.
Skeptics doubt the progress of AI
The AI debate is not one-way traffic, and some are skeptical of how rapidly the debate has escalated toward an extinction-level narrative.
For example, we are still struggling to build effective AI systems such as driverless cars, a task that would seem rudimentary for an AI capable of taking over the planet.
Moreover, AI’s computerized intelligence is difficult to compare to human intelligence. Humans and other biological organisms can process complex sensory data across the five senses in mere milliseconds. Robots can’t even compete with babies when it comes to sensory tasks.
More remarkable is the energy efficiency of the human brain – it uses less power than a lightbulb. Currently, the data centers behind AIs like ChatGPT require the same amount of power as a town. Biology’s edge over computers amounts to more than intelligence.
However, while it’s easy to point out flaws in the futuristic visions of robots taking over the planet, that doesn’t rule out humans using AI to extend their own capabilities to negative ends, which is a far more immediate risk. For now, the risk lies with us rather than with the technology itself.
While AI has a long way to go before it walks quite literally among us, the CAIS statement is a significant milestone for mitigating imminent risks, such as AI job replacements, deception, and weaponization.
To add credibility to the message, we need to see evidence of collaboration between the protagonists soon, collaboration that amounts to more than signing statements.