The Dawn of Safe AI: U.S. Launches Artificial Intelligence Safety Institute

Well, isn’t this a breath of fresh air? The Biden-Harris administration, in a move that screams ‘we’re serious about AI’, has announced the establishment of the U.S. Artificial Intelligence Safety Institute (AISI). And who’s going to run this show? None other than the Department of Commerce’s National Institute of Standards and Technology (NIST).

The AISI’s mission is as grand as it is vital. It is tasked with developing safety, security, and testing standards for AI models. Not only that, it will also develop standards for authenticating AI-generated content through watermarking. And just to make things more interesting, it will provide testing environments where researchers can evaluate emerging AI risks and address known impacts.

Now, you might be thinking, ‘that’s a lot for one institute to handle’. But fear not! The NIST-led team plans to leverage outside expertise, working with partners in academia, industry, government, and civil society to advance AI safety. It also plans to work with similar institutes in allied and partner nations to align and coordinate efforts in this sphere.

During her policy speech on AI in the UK, Vice President Kamala Harris voiced the administration’s belief that leaders from government, civil society, and the private sector have a duty to ensure that AI is adopted and advanced in a way that protects the public from potential harm. It was in that speech that she announced the establishment of the United States AI Safety Institute, which will create rigorous standards to test the safety of AI models for public use.

Vice President Harris also announced six additional AI initiatives during her visit to the UK, highlighting the administration’s commitment to advancing the safe and responsible use of the emerging technology. Among these initiatives, the Office of Management and Budget’s (OMB) first-ever draft policy guidance for U.S. Federal government use of AI is particularly notable.

Speaking of the global impact of AI, the vice president announced that 31 nations have joined the United States in endorsing its Political Declaration on the Responsible Military Use of AI and Autonomy. The declaration establishes a set of norms for responsible development, deployment, and use of military AI capabilities.

In an effort to protect the most vulnerable in the U.S., the Biden-Harris administration will also launch an initiative to detect and block AI-driven fraudulent phone calls. The White House will host a virtual hackathon, inviting companies to field teams of technology experts to build AI models that can detect and block unwanted robocalls and robotexts.

Finally, it was announced that ten leading foundations have collectively committed more than $200 million in funding toward initiatives to advance AI. The foundations are forming a funders network to coordinate new philanthropic giving focused on ensuring AI protects democracy and rights, driving AI innovation in the public interest, empowering workers to thrive amid AI-driven changes, improving transparency and accountability of AI, and supporting international rules and norms on AI.
