This week, Microsoft, Google, OpenAI, and Anthropic formed the Frontier Model Forum, dedicated to AI safety and accountability.
The Forum’s mission includes identifying AI best practices, supporting the public and private sectors, collaborating with academics, and working with policymakers to ensure AI’s benefits are channeled and its risks mitigated.
This comes shortly after seven AI companies opted into a voluntary AI framework outlined by the White House.
Such collaboration between rivals has been relatively common in the burgeoning AI industry, where competitors have occasionally shared hardware and research, though the real substance behind their models remains under lock and key.
Microsoft-backed OpenAI has been collaborating with Microsoft’s rivals, Salesforce and DuckDuckGo, showing that relationships in the AI world are complicated.
Experts believe the success of the Forum demands a level of collaboration beyond sharing resources for mutual benefit – it involves AI companies disclosing their innermost workings.
Leading Gartner analyst Avivah Litan has doubts about whether the Forum will achieve its aims, stating, “The chances that a group of diehard competitors will arrive at a common useful solution and actually get it to be implemented ubiquitously – across both closed and open source models that they and others control across the globe – is practically zero.”
The Forum currently consists of four founding members but is open to other organizations committed to safe model development and supporting broader industry initiatives.
Litan expressed optimism that the companies could collectively influence regulation for the better, but emphasized the need for an international regulatory body to enforce global AI standards.
“Given that we haven’t yet witnessed global governmental collaboration on climate change, where solutions are already known, expecting such cooperation on AI, with solutions still undefined, is even more challenging. At least these companies can strive to identify solutions, and that’s a positive aspect,” said Litan.
Some express optimism about the Frontier Model Forum
Dr. Kate Devlin, senior lecturer in AI at King’s College London, acknowledged the Frontier Model Forum’s positive sentiments.
“The announcement appears to concentrate primarily on technical approaches. However, when developing AI responsibly, there are broader socio-technical aspects that must be addressed,” Dr. Devlin noted.
Moreover, the Forum could simplify cooperation on regulation, ensuring rules are applied industry-wide.
For instance, the Forum pledged support for the G7’s Hiroshima AI process, a pact made at the G7 Summit in Japan that calls for collaboration with the OECD and GPAI.
With little to no binding regulation surrounding AI, the technology’s direction remains in the hands of developers, who are at least superficially attempting to curb risks in the immediate term.
The results of collaborative efforts such as the Forum remain to be seen.