OpenAI and Anthropic sign agreements with the US government for artificial intelligence research and testing

Artificial intelligence startups OpenAI and Anthropic have signed agreements with the US government covering research, testing and evaluation of their AI models, the US AI Safety Institute said on Thursday.

The agreements, the first of their kind, come as companies face regulatory scrutiny over the safe and ethical use of AI technologies.

California legislators are set to vote this week on a bill that would broadly regulate how AI is developed and deployed in the state.

“Safe and trustworthy AI is crucial to the positive impact of technology. Our collaboration with the US AI Safety Institute leverages their extensive expertise to rigorously test our models before widespread deployment,” said Jack Clark, co-founder and head of policy at Anthropic, which is backed by Amazon and Alphabet.

Under the agreements, the US AI Safety Institute will have access to major new models from OpenAI and Anthropic before and after their public release.

The agreements will also enable collaborative research to assess the capabilities of AI models and the risks associated with them.

“We believe the institute has a critical role to play in defining U.S. leadership in the responsible development of artificial intelligence, and we hope our work together will provide a framework for the rest of the world to build upon,” said Jason Kwon, chief strategy officer at OpenAI, the maker of ChatGPT.

“These agreements are just the beginning, but they are an important milestone as we work to help responsibly manage the future of AI,” said Elizabeth Kelly, director of the U.S. AI Safety Institute.

The institute, part of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), will also collaborate with the U.K.’s AI Safety Institute and provide feedback to companies on potential safety improvements.

The U.S. AI Safety Institute was launched last year under an executive order from President Joe Biden’s administration to assess known and emerging risks of artificial intelligence models.

(Reporting by Harshita Mary Varghese in Bengaluru; Editing by Shinjini Ganguli and Shreya Biswas)
