In a significant move to enhance AI safety, the Biden Administration has launched the US AI Safety Institute Consortium (AISIC). Billed as the “first-ever consortium dedicated to AI safety,” the initiative follows the appointment of a top White House aide as director of the new US AI Safety Institute (USAISI) at the National Institute of Standards and Technology (NIST).
Over 200 Leading Firms Join Forces
The AISIC boasts a robust coalition of more than 200 member companies and organizations. This diverse group includes tech giants such as Google, Microsoft, and Amazon, along with prominent AI firms like OpenAI, Cohere, and Anthropic. The consortium extends its reach to encompass research labs, civil society, academic teams, state and local governments, and nonprofits.
AISIC’s Mission and Focus
In a blog post, NIST emphasized that the AISIC represents the most extensive collection of test and evaluation teams assembled to date, with the aim of establishing the groundwork for a new measurement science in AI safety. The consortium will operate under the USAISI and contribute to the priorities of President Biden’s Executive Order, covering red-teaming, capability evaluations, risk management, safety, security, and watermarking of synthetic content.
Birth of the Consortium: An AI Executive Order
President Biden’s AI Executive Order, unveiled on October 31, 2023, served as the catalyst for the consortium’s formation. Participation in the AISIC is open to all interested organizations willing to contribute their expertise, products, data, or models to the consortium’s activities.
Consortium Guidelines and Participation
Selected participants, who pay an annual fee of $1,000, enter into a Consortium Cooperative Research and Development Agreement (CRADA) with NIST. Consortium members will contribute to developing guidelines in critical areas including secure AI deployment, red-teaming, privacy-preserving machine learning, and more.
NIST Funding and Congressional Scrutiny
Despite the high-profile announcement, details about how NIST will fund its AI safety work remain unclear. Questions persist about how the underfunded agency, with an annual budget of just over $1.6 billion, will support this initiative. A bipartisan request for $10 million in funding faces an uncertain path in the Senate Appropriations Committee, raising concerns about the institute’s financial sustainability.
In mid-December, House Science Committee lawmakers criticized NIST for a lack of transparency in announcing research grants related to the USAISI. Rumman Chowdhury, former AI lead at Accenture, highlighted the USAISI’s funding challenge, describing it as an “unfunded mandate” imposed via executive order.
In the quest for comprehensive AI safety, the AISIC stands out as a milestone in collaboration, bringing together industry leaders and experts to advance the ethical and secure deployment of artificial intelligence.