The founders of the Frontier Model Forum, Anthropic, Google, Microsoft, and OpenAI, have set up a $10 million AI Safety Fund to promote responsible artificial intelligence research.
The Forum also announced the appointment of Chris Meserole as its first Executive Director. These are the first major announcements from the four companies since they established the Frontier Model Forum industry body in July.
The Frontier Model Forum is an industry body focused on ensuring that any advancements in AI technology and research remain safe, secure, and under human control while identifying best practices and standards. It also aims to facilitate information-sharing among policymakers and the industry.
Why the fund was created
Explaining the purpose of the $10 million AI Safety Fund, the Forum said in a statement announcing the initiative:
- “Over the past year, industry has driven significant advances in the capabilities of AI. As those advances have accelerated, new academic research into AI safety is required. To address this gap, the Forum and philanthropic partners are creating a new AI Safety Fund, which will support independent researchers from around the world affiliated with academic institutions, research institutions, and startups.
- “The initial funding commitment for the AI Safety Fund comes from Anthropic, Google, Microsoft, and OpenAI, and the generosity of our philanthropic partners, the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn. Together this amounts to over $10 million in initial funding.”
- “The primary focus of the Fund will be supporting the development of new model evaluations and techniques for red teaming AI models to help develop and test evaluation techniques for potentially dangerous capabilities of frontier systems. We believe that increased funding in this area will help raise safety and security standards and provide insights into the mitigations and controls industry, governments, and civil society need to respond to the challenges presented by AI systems.
- “The Fund will put out a call for proposals within the next few months. Meridian Institute will administer the Fund — their work will be supported by an advisory committee comprised of independent external experts, experts from AI companies, and individuals with experience in grantmaking,” the Forum added.
Earlier this year, the members of the Forum signed on to voluntary AI commitments at the White House, which included a pledge to facilitate third-party discovery and reporting of vulnerabilities in their AI systems.
The Forum said the AI Safety Fund is an important part of fulfilling this commitment, as it provides the external community with funding to better evaluate and understand frontier systems.