Google, Microsoft, Anthropic, and OpenAI have announced the formation of the Frontier Model Forum, an industry-led collective committed to safeguarding the future of ‘frontier’ AI models.
The Frontier Model Forum aims to promote positive AI development, enhance safety research, identify best practices, and create a collaborative nexus for communicating with the public and stakeholders.
Blog posts from Google and Microsoft note that the Forum’s membership is open to other AI developers, but the founding lineup is notable for including OpenAI and Anthropic — startups backed by Microsoft and Google, respectively.
Inter-industry collaboration on AI has been a talking point for much of the year.
This is among the first concrete evidence we’ve seen that Microsoft and Google, at least, will work together on AI – it will be interesting to see whether Meta, Amazon, Inflection, and others join too.
“Frontier models,” a term coined by the Forum, refers to large-scale machine-learning models that eclipse the capabilities of existing models. With their capacity to perform a wide range of tasks, these frontier models have opened new frontiers for tech innovation, hence the name.
Frontier models could be described as ‘generalist agents,’ capable of performing multi-skilled, multi-modal tasks across numerous disciplines.
The announcement of the Frontier Model Forum is concurrent with mounting pressures on AI companies to self-regulate, including a framework announced by the White House to which 7 leading AI companies have already agreed: Google, Anthropic, Microsoft, Amazon, OpenAI, Meta, and Inflection.
What will the Frontier Model Forum do?
A Microsoft blog post outlined 4 key areas of research for the Forum:
- Advancing AI safety research: The Forum intends to foster the responsible development of frontier models, minimize potential risks, and facilitate independent, standardized evaluations of AI capabilities and safety.
- Identifying best practices: Geared towards outlining and sharing responsible practices for the development and deployment of frontier models, as well as promoting public understanding of AI technology’s nature, capabilities, limitations, and impacts.
- Fostering collaboration: Members aim to share knowledge about trust and safety risks with policymakers, academics, civil society, and other companies in the industry.
- Addressing societal challenges: The forum supports the development of AI applications to tackle some of society’s major challenges, such as climate change, early cancer detection and prevention, and cybersecurity threats.
The coming months will see the Frontier Model Forum establish an Advisory Board to guide its strategy and priorities. A working group and executive board will establish critical institutional arrangements, including a charter, governance, and funding.
The Forum also intends to consult with civil society and governments in the coming weeks on the design of the Forum and potential collaboration avenues. This includes supporting multilateral initiatives such as the G7 Hiroshima process, the OECD’s work on AI risks, and the US-EU Trade and Technology Council.
Industry leaders weigh in
Executives and researchers from all 4 current members of the Forum have discussed their intent to work together toward AI’s future.
Kent Walker, President of Global Affairs at Google & Alphabet, said, “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. Engagement by companies, governments, and civil society will be essential to fulfill the promise of AI to benefit everyone.”
Brad Smith, Vice Chair & President at Microsoft, echoed Walker’s sentiment. He emphasized the responsibility of companies spearheading AI development, saying, “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Anna Makanju, Vice President of Global Affairs at OpenAI, highlighted the balance between the benefits and necessary oversight of advanced AI, stating, “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies—especially those working on the most powerful models—align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
Dario Amodei, CEO of Anthropic, affirmed the potential of AI to reshape the world, stating, “Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”
Tech companies appear to be laying the groundwork for a new era of highly capable AI, and things are moving swiftly: it was only earlier this year that politicians in the EU and US began urging immediate industry collaboration.