Anthropic, Google, Microsoft, and OpenAI launched the Frontier Model Forum in July and have now committed more than $10m to an AI Safety Fund to support the initiative.
The Frontier Model Forum was established to set AI safety standards and evaluate frontier models in order to promote responsible development. The group now appears to be getting down to business, announcing Chris Meserole as its first Executive Director.
Meserole has solid AI credentials, having recently served as Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution.
In a statement regarding his new appointment, Meserole said, “The most powerful AI models hold enormous promise for society, but to realize their potential we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum.”
The forum’s mission is to:
- Advance AI safety research to promote responsible development of frontier models and minimize potential risks
- Identify safety best practices for frontier models
- Share knowledge with policymakers, academics, civil society and others to advance responsible AI development
- Support efforts to leverage AI to address society’s biggest challenges
The AI Safety Fund backs the initiative with an initial pool of more than $10m from Anthropic, Google, Microsoft, and OpenAI, along with other philanthropic partners.
Today, we’re launching a new AI Safety Fund from the Frontier Model Forum: a commitment from @Google, @AnthropicAI, @Microsoft and @OpenAI of over $10 million to advance independent research to help test and evaluate the most capable AI models. ↓ https://t.co/TUhrKQWKc1
— Google DeepMind (@GoogleDeepMind) October 25, 2023
One of the Frontier Model Forum’s stated core objectives is “collaborating across sectors”, an ambition that may be hampered by the notable absence of Meta.
Earlier this year, the Forum’s founding members signed on to voluntary AI commitments at the White House. Even without Meta, the group may carry sufficient weight to set AI standards that become more widely adopted.
How these standards eventually make their way into legislation remains to be seen. With disagreement over key areas such as training data and the potential risks of open-source models, consensus will be hard to find.