Google, Anthropic, Microsoft, Amazon, OpenAI, Meta, and Inflection: these seven AI industry giants have agreed to voluntary commitments established by the White House.
“We must be clear-eyed and vigilant about the threats emerging technologies can pose,” President Biden said of AI, citing deepfakes on social media among other risks.
The official statement, published on the White House website, highlights several key practices the companies have pledged to follow when developing and releasing AI products to the public.
“U.S. companies lead the world in innovation, and they have a responsibility to do that and continue to do that, but they have an equal responsibility to ensure that their products are safe, secure, and trustworthy,” said White House Chief of Staff Jeff Zients in a recent NPR interview.
However, how AI companies will be held to account if they fail to fulfill their pledges remains unclear. “We will use every lever that we have in the federal government to enforce these commitments and standards. At the same time, we do need legislation,” explained Zients.
The White House is working in tandem with Congress to formulate AI legislation that promotes the technology’s potential for good while controlling its risks. A slow starter on AI regulation, the US has since made significant strides toward establishing a position on the technology and steering it toward positive ends.
What are the commitments?
The commitments involve transparency, testing, and information sharing with the public and government.
AI developers are committing to security testing of their products, “carried out in part by independent experts,” to shield against significant risks. Google and OpenAI will have their AI systems tested at the upcoming DEF CON hacking convention in Las Vegas, a move endorsed by the White House.
Other promises include developing methods to indicate when content is AI-generated, such as watermarks, avoiding bias and discrimination, and safeguarding privacy.
These agreements, however, are only the beginning. “The commitments the companies are making are a good start, but it’s just a start,” said Zients. “The key here is implementation and execution in order for these companies to perform and earn the public’s trust.”
The White House has also been conducting discussions with civil rights and union leaders to understand AI’s influence on their work.
However, Ifeoma Ajunwa, a law professor at Emory, expresses disappointment that the White House has primarily collaborated with the AI companies themselves. “We also want to ensure that we are including other voices that don’t have a profit motive,” she said.
“We should definitely invite corporate leaders and tech titans to be part of this conversation, but they should not be leading the conversation.”
The new agreements could also unfairly favor larger, established companies, stifling the growth of startups.
Victor Menaldo, a political economy professor at the University of Washington, suggests that “the bigger established firms can kind of game it to benefit them, and the newcomers don’t have a say.”
Regulation will also hit the open-source community, which doesn’t necessarily have the same trust and governance clout as established tech firms like Google and Microsoft.
Compliance will likely be expensive and threatens to burden startups.
White House officials didn’t clarify whether new companies would be added to the agreements: “We expect that other companies will see how they also have an obligation to live up to the standards of safety, security, and trust. And they may choose — and we welcome them choosing — joining these commitments.”
This is another voluntary framework, albeit one carrying slightly more weight than others. Will it have any practical impact on AI development? Only time will tell.