OpenAI and Anthropic to collaborate with the US AI Safety Institute

The U.S. government announced agreements with leading artificial intelligence startups OpenAI and Anthropic to help test and evaluate the companies’ upcoming AI models.

Under the agreements, announced Thursday, the U.S. AI Safety Institute will receive early access to the companies’ major new AI models to assess their capabilities and risks, and will collaborate with the companies on methods to mitigate potential problems. The AI Safety Institute is part of the Commerce Department’s National Institute of Standards and Technology, or NIST. The agreements come amid a growing push to mitigate potentially catastrophic AI risks through regulation, such as California’s controversial AI safety bill SB 1047, which recently passed the state Assembly.

“Safety is essential to driving breakthrough technological innovation,” said Elizabeth Kelly, director of the AI Safety Institute, in a statement. “These agreements are just the beginning, but they are an important milestone in our work to help responsibly manage the future of AI.”

The institute will work closely with the UK AI Safety Institute on safety testing and will provide both companies with feedback on potential safety improvements to their models. Anthropic previously tested its Claude 3.5 Sonnet model in coordination with the UK’s AI Safety Institute ahead of the technology’s launch, and the US and UK organizations have previously said they will work together to implement standardized testing.

“We strongly support the mission of the US AI Safety Institute and look forward to working together to inform best practices and safety standards for AI models,” Jason Kwon, OpenAI’s chief strategy officer, said in a statement. “We believe the institute has a critical role to play in defining US leadership in the responsible development of artificial intelligence and hope that our work together will provide a framework upon which the rest of the world can build.”

Anthropic also said it was important to develop the capacity to effectively test AI models. “Safe and trustworthy AI is crucial to the positive impact of technology,” said Jack Clark, co-founder and chief policy officer at Anthropic. “This strengthens our ability to identify and mitigate risks, driving responsible AI development. We are proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.”

The U.S. AI Safety Institute was established in 2023 as part of the Biden-Harris administration’s Executive Order on AI, and is tasked with developing tests, assessments, and guidelines for responsible AI innovation.

Read also: OpenAI says weekly ChatGPT users have grown to 200 million
