A US government-commissioned report says that new AI safety measures and policies are needed to prevent an “extinction-level threat to the human species.”
The report, which is available on request, was compiled by Gladstone AI, a company set up in 2022 to advise US government departments on AI opportunities and risks. Titled "An Action Plan to Increase the Safety and Security of Advanced AI," the report took a year to complete and was funded with $250,000 of federal money.
The report focuses on catastrophic risks from advanced AI and proposes a comprehensive action plan to mitigate them. The authors clearly don't share Yann LeCun's more laissez-faire view of AI threats.
The report says “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”
The two main risk categories it raises are the intentional weaponization of AI and the unintended consequences of an AGI going rogue.
To prevent these from happening, the report provides an action plan made up of five lines of effort, or LOEs, that the US government needs to put in place.
Here’s the short version:
LOE1 – Establish an AI observatory for better monitoring of the AI landscape. Establish a task force to set rules for responsible AI development and adoption. Use supply chain restrictions to drive conformance among international AI industry players.
LOE2 – Increase preparedness for advanced AI incident response. Coordinate interagency working groups, government AI education programs, an early warning framework for detecting emerging AI threats, and scenario-based contingency plans.
LOE3 – AI labs are more focused on AI development than on AI safety. The US government needs to fund advanced AI safety and security research, including research into AGI-scalable alignment.
Develop safety and security standards for responsible AI development and adoption.
LOE4 – Establish an “AI regulatory agency with rulemaking and licensing powers.”
Establish a civil and criminal liability framework to prevent "WMD-scale, unrecoverable consequences," including "emergency powers to enable rapid response to fast-moving threats."
LOE5 – Establish an AI safeguards regime in international law and secure the supply chain. Drive "international consensus" on AI risks with an AI treaty enforced by the UN or an "international AI agency."
In summary, AI could be very dangerous, so we need a lot of laws to control it.
The report says that advanced open-source models are a bad idea and that the US government should consider making it illegal to release the weights of AI models under penalty of imprisonment.
If that sounds a little alarmist and heavy-handed to you, then you're not alone. The report has come in for criticism over its lack of scientific rigor.
in case you were wondering how seriously to take this recent report pic.twitter.com/Uy47sTmE0Z
— xlr8harder (@xlr8harder) March 12, 2024
Open-source advocates like William Falcon, CEO of Lightning AI, were especially critical of the blanket statements over the dangers of open models.
Farenheight 451 but for model weights…
According to TIME, open source AI will cause an “extinction level event to humans” 🙄
first of all, very silly claim. but if you really want to put on your tin hat, close source AI is more likely to cause this https://t.co/x4bctSjamZ pic.twitter.com/PYuwnoIbda
— William Falcon ⚡️ (@_willfalcon) March 12, 2024
The truth about the risks advanced AI poses for humanity probably lies somewhere between "We're all going to die!" and "There's no need to worry."
Page 33 of an AI survey referenced in the report gives some interesting examples of how AI models game the system and trick their operators to reach the objectives they're designed to optimize for.
When today's AI models already exploit loopholes to reach a goal, it's hard to dismiss the possibility of future super-smart models doing the same. And at what cost?
You can read the executive summary of the report here.