The US Department of Commerce has issued a policy report endorsing open AI models, in a sign of the Biden-Harris administration’s position on the contentious issue.
Supporters of closed models, like OpenAI and Google, talk up the risks of open models, while others, like Meta, are committed to variations of open-source AI.
The DOC report, authored by its National Telecommunications and Information Administration (NTIA) division, “embraces openness in artificial intelligence (AI) while calling for active monitoring of risks in powerful AI models.”
The arguments against openly releasing models generally point to the dual-use risk of bad actors being able to bend the models to their nefarious will. The report acknowledges this dual-use risk but says the benefits outweigh it.
“The Biden-Harris Administration is pulling every lever to maximize the promise of AI while minimizing its risks,” said U.S. Secretary of Commerce Gina Raimondo.
“Today’s report provides a roadmap for responsible AI innovation and American leadership by embracing openness and recommending how the U.S. government can prepare for and adapt to potential challenges ahead.”
The report says the US government should actively monitor for potential emerging risks but should refrain from restricting the availability of open-weight models.
Open weights open doors
Anybody can download Meta’s latest Llama 3.1 405B model and its weights. While the training data hasn’t been made available, having the weights gives users far more options than they’d have with GPT-4o or Claude 3.5 Sonnet, for example.
Access to the weights lets researchers and developers see what’s happening under the hood and identify and rectify biases, errors, or unexpected behaviors within the model.
It also makes it much easier for users to fine-tune the models for specific use cases, whether good or bad.
The report notes that the “accessibility afforded by open weights significantly lowers the barrier of entry to fine-tune models for both beneficial and harmful purposes. Adversarial actors can remove safeguards from open models via fine-tuning, then freely distribute the model, ultimately limiting the value of mitigation techniques.”
The risks and benefits of open AI models highlighted by the report include:
- Public safety
- Geopolitical considerations
- Societal issues
- Competition, innovation, and research
The report is candid in acknowledging the risks in each of these areas but concludes that, if the risks are managed, the benefits outweigh them.
With a closed model like GPT-4o, we all have to trust that OpenAI is doing a good job with its alignment efforts and isn’t hiding potential risks. With an open-weight model, any researcher can identify safety and security vulnerabilities in a model or perform a third-party audit to ensure the model complies with regulations.
The report says the “availability of model weights could allow countries of concern to develop more robust advanced AI ecosystems… and undercut the aims of U.S. chip controls.”
However, in the plus column, making model weights available “could bolster cooperation with allies and deepen new relationships with developing partners.”
The US government is clearly sold on the idea of open AI models, even as federal and state regulations are put in place to de-risk the tech. If the Trump-Vance team wins the next election, we’ll likely see continued support for open AI, but with even less regulation.
Open-weight models may be great for innovation, but if emerging risks catch regulators unawares, there will be no putting the AI genie back in the bottle.