The rapid development of AI-driven tools has many tech experts concerned. One of them is the UK's AI taskforce adviser, Matt Clifford, who warned that AI could become powerful enough to "kill many humans" within roughly two years. Clifford is helping Prime Minister Rishi Sunak set up the government's AI taskforce.
In an interview on TalkTV, Matt Clifford spoke about the exponential growth of systems powered by artificial intelligence and the risks they could pose if they are not regulated in time. Clifford, who chairs the government's Advanced Research and Invention Agency (ARIA), warned that current systems are growing more powerful "at an ever-increasing rate."
Different types of AI risks
He stressed the need to regulate the use of artificial intelligence at every level, saying that unless officials put safety regulations in place, AI systems will become far more powerful within two years.
"We've got two years to get in place a framework that makes both controlling and regulating these very large models much more possible than it is today," Clifford said.
He added that AI poses different types of risks, some near-term and others long-term.
According to the AI taskforce adviser, AI has the potential to "create recipes for bioweapons" and to enable serious cyberattacks that could threaten humanity. If it grows that powerful, he noted, AI could lead to the deaths of many humans.
The interview took place soon after an open letter about the potential risks of AI was published by the Center for AI Safety, signed by many well-known AI experts, including OpenAI CEO Sam Altman and Geoffrey Hinton, often called the "Godfather of AI."
"They're talking about what happens once we effectively create a new species, sort of an intelligence that's greater than humans," Clifford said of the letter, which states that regulating AI and minimizing its risks "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
AI can still be used for good
According to Clifford, AI can still be used for good, provided that governments worldwide find a way to regulate it.
“If it goes right . . . you can imagine AI curing diseases, making the economy more productive, helping us get to a carbon neutral economy,” Clifford said.