The Biden administration has enlisted eight more AI companies into its voluntary safety framework.
Google, Anthropic, Microsoft, Amazon, OpenAI, Meta, and Inflection joined the White House’s voluntary framework for ethical AI development and deployment in late July.
This week, the White House announced that Adobe, IBM, Nvidia, Salesforce, Cohere, Scale AI, Palantir, and Stability had joined them.
Biden’s framework encourages companies to share safety protocols and safeguarding measures, disclose the capabilities and limitations of their AI technologies, and use AI to “help address society’s greatest challenges.”
Palantir’s inclusion in particular raises eyebrows: the company is known for its military AI applications and domestic surveillance technologies, as well as its relationships with governments and military organizations around the world.
Palantir CEO Alex Karp recently encouraged the US to leverage AI for weaponry and military technology, and the Air Force has already conducted successful test flights with an AI-controlled jet.
Karp, speaking at a February summit on AI and military technology, said the company is providing data analytics for battlefield applications to the Ukrainian military. Even so, he argued that an “architecture that allows transparency on the data sources” should be “mandated by law.”
During a Senate Armed Services Committee hearing, Palantir CTO Shyam Sankar commented that halting AI development could compromise US technological supremacy over China. He stressed the need for the US to further invest in “capabilities that will terrify our adversaries.”
Critics suggest that the White House’s framework overlooks vital ethical considerations, such as the transparency of generative AI training data, which most firms keep under tight lock and key.
While the White House indicated that an executive order is under development to “protect Americans’ rights and safety,” no further details have been released.
The complexities of ethical AI are not limited to Palantir. Other major tech firms like Google and Microsoft have had their share of controversies over military contracts.
Google once provided AI for Project Maven, a Pentagon initiative to analyze drone surveillance footage, but withdrew after employee protests. Palantir reportedly took over the contract afterward.
As it stands, the Biden administration’s efforts rely primarily on non-binding recommendations and executive orders rather than legislation.
White House Chief of Staff Jeff Zients told Reuters that the administration is “pulling every lever we have” to mitigate the risks associated with AI.
Meanwhile, Senate Majority Leader Chuck Schumer is holding a series of “AI Insight Forums,” the first taking place tomorrow, September 13.
The White House’s voluntary framework extends its reach
According to a government fact sheet covering the framework, the commitments of those who have joined thus far serve as an “important bridge to government action.”
Companies have agreed to subject their AI systems to rigorous internal and external security testing before public release.
“This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity,” stated the White House.
Notably, the firms will also commit to sharing information on managing AI risks within the industry and with governments, civil society, and academia.
The goal is to foster a collaborative environment for improving AI safety protocols and to provide a platform for technical collaboration on thwarting attempts to circumvent safeguards.
“In developing these commitments, the Administration consulted with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK,” the White House revealed.
Other initiatives aim to bolster US resilience to AI-related threats, including a competition called the “AI Cyber Challenge” run in collaboration with DARPA.
AI regulation appears to be approaching critical mass, with the EU also progressing toward the final version of its AI Act.