The Biden administration has launched a consortium of more than 200 major tech companies, academic institutions, and research groups to collaborate on developing ethical guidelines and safety standards for AI technology. Biden’s “AI Safety Institute Consortium” claims its goal is to develop ways to “mitigate AI risks and harness its potential.”
Mashable reports that the Biden administration has officially tapped major tech players and other AI stakeholders to address safety and trust in AI development. On Thursday, the U.S. Department of Commerce announced the creation of the AI Safety Institute Consortium (AISIC).
The consortium, housed under the Commerce Department’s National Institute of Standards and Technology (NIST), will follow President Biden’s AI executive order mandates. This includes “developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content,” said Secretary of Commerce Gina Raimondo.
Over 200 participants have joined AISIC so far. These include AI leaders like ChatGPT developer OpenAI, Google, Microsoft, Apple, Amazon, Mark Zuckerberg’s Meta, and NVIDIA. Academics from MIT, Stanford, Cornell, and other universities are also participating, along with industry researchers, standards bodies, and think tanks such as the Center for AI Safety, IEEE, and the Responsible AI Institute.
The consortium stems from Biden’s sweeping executive order seeking to regulate AI development. “The government has a role in setting standards and tools to mitigate AI risks and harness its potential,” said Raimondo.
While the EU has worked on AI regulations, this marks a major U.S. government effort to formally oversee AI. “I understand this is a first step by the administration to work with industry on some of the hard problems. But it is an important first step and I think this is an area where industry and government collaboration is critical,” said Microsoft President Brad Smith.
Overall, AISIC represents unprecedented cooperation between the U.S. government and the private sector to ensure AI safety. “Only through such open and honest collaboration can we manage AI in a way that builds trust and protects U.S. values,” said Secretary Raimondo. The initiative provides hope for developing ethical AI that protects privacy and national security.
Read more at Mashable here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.