OpenAI and Anthropic Partner with US Government to Ensure AI Safety
Washington, D.C. – In a significant step towards responsible AI development, leading artificial intelligence companies OpenAI and Anthropic have agreed to provide the US government with access to their new AI models before public release. The partnership, announced by the US AI Safety Institute on Thursday, aims to enhance the safety and security of these powerful technologies.
Under the memorandum of understanding signed by both companies, the US government will be granted access to the models for evaluation and testing, both before and after their public launch. This collaboration will enable the government to assess potential risks and mitigate issues before they arise. The institute will also work with its counterpart in the UK to provide feedback on safety improvements.
“We strongly support the mission of the US AI Safety Institute and look forward to collaborating with them to develop safety best practices and standards for AI models,” said Jason Kwon, OpenAI’s Chief Strategy Officer. “We believe that the Institute plays a critical role in ensuring that the US leads in the responsible development of AI. We expect that our work with the Institute will provide a framework that can be adopted globally.”
Anthropic, known for its focus on AI safety, echoed OpenAI’s commitment to responsible development. Jack Clark, the company’s co-founder and head of policy, emphasized the importance of effective testing for AI models. “Ensuring that AI is safe and reliable is critical to enabling the technology to have a positive impact,” Clark stated. “Through testing and collaboration, we can better identify and mitigate the risks posed by AI, driving responsible AI development. We are proud to be part of this important work and hope to set new standards for AI safety and trustworthiness.”
This move comes as both federal and state lawmakers grapple with how to regulate AI without stifling innovation. Earlier this week, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require AI companies in the state to implement specific safety measures before training advanced foundation models. The legislation faced opposition from AI companies, including OpenAI and Anthropic, who argued it could harm smaller, open-source developers. The bill, however, has been amended and now awaits Governor Gavin Newsom’s signature.
Meanwhile, the White House has been actively seeking voluntary commitments from major AI companies regarding safety measures. Several leading AI companies have already made non-binding pledges to invest in cybersecurity and discrimination research and to watermark AI-generated content.
Elizabeth Kelly, director of the US AI Safety Institute, highlighted the significance of these agreements. “These new agreements are just the beginning, but they are an important milestone in our efforts to help responsibly manage the future of AI,” Kelly said in a statement.
The collaboration between OpenAI, Anthropic, and the US government marks a crucial step towards ensuring the safe and responsible development of AI. By providing early access to AI models, the government can proactively identify and address potential risks, fostering public trust and confidence in this rapidly evolving technology. The partnership sets a precedent for other countries and organizations to follow, paving the way for a future where AI benefits humanity while mitigating potential harms.