AI regulation: Biden administration outlines government ‘guardrails’ for AI tools

President Joe Biden on Thursday signed the first national security memorandum detailing how the Pentagon, intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting “guardrails” on how such tools are used in decisions ranging from nuclear weapons to granting asylum.

The new document is the latest in a series Biden has released to address the challenges of using artificial intelligence tools to speed up government operations, whether detecting cyberattacks or predicting extreme weather, while limiting the most dystopian possibilities, including the development of autonomous weapons.

But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will come due after Biden leaves office, leaving open the question of whether the next administration will abide by them. While most national security memoranda are adopted or only marginally modified by successive presidents, it is far from clear how former President Donald Trump would approach the issue if elected next month.

The new directive was announced Thursday at the National War College in Washington by Jake Sullivan, the national security adviser, who led many of the efforts to examine the uses and threats of the new tools. He acknowledged that one challenge is that the U.S. government funds or owns very few of the key AI technologies, and that they evolve so quickly that they often defy regulation.

“Our government took an early and critical role in shaping advances, from nuclear physics and space exploration to personal computing and the internet,” Sullivan said. “That has not been the case with most of the AI revolution. While the Department of Defense and other agencies funded much of the AI work in the 20th century, the private sector has driven much of the last decade of progress.”

Biden’s advisers have said, however, that the absence of guidelines on how the Pentagon, the CIA or even the Department of Justice may use AI has impeded development, as companies worried about which applications might be legal.

“AI, if used appropriately and for its intended purposes, can offer great benefits,” the new memo concludes. “If misused, AI could threaten U.S. national security, reinforce authoritarianism around the world, undermine democratic institutions and processes, facilitate human rights abuses,” and more. These conclusions have now become common warnings. But they are a reminder that it will be much harder to set rules for AI than it will be to create, say, arms control agreements in the nuclear age. Like cyber weapons, AI tools cannot be counted or inventoried, and everyday uses can, as the memo makes clear, go wrong “even without malicious intent.”

That echoed the theme Vice President Kamala Harris struck when she spoke on behalf of the United States last year at international conferences aimed at achieving some consensus on rules for how the technology would be used. But while Harris, now the Democratic presidential nominee, was tapped by Biden to lead the effort, it was notable that she did not publicly participate in Thursday’s announcement.

The new memo runs about 38 pages in its unclassified version, with a classified appendix. Some of its conclusions are obvious: it rules out, for example, letting artificial intelligence systems decide when to launch nuclear weapons; that decision is left to the president as commander in chief.

While it seems clear that no one would want the fate of millions to depend on the choice of an algorithm, the explicit statement is part of an effort to draw China into deeper conversations about limits on high-risk applications of AI. An initial conversation with China on the issue, held in Europe last spring, failed to make any real progress.

“This focuses attention on the question of how these tools affect the most critical decisions that governments make,” said Herb Lin, a Stanford University scholar who has spent years examining the intersection of AI and nuclear decision-making.

“Obviously no one is going to give the nuclear codes to ChatGPT,” Lin said. “But there is still a question about how much of the information the president receives is processed and filtered through artificial intelligence systems, and whether that is a bad thing.”

The memorandum requires an annual report to the president, prepared by the Department of Energy, on the “radiological and nuclear risk” of “frontier” AI models that could facilitate the assembly or testing of nuclear weapons. There are similar deadlines for periodic classified assessments of how AI models could make it possible to “generate or exacerbate deliberate chemical and biological threats.”

It is those last two threats, chemical and biological, that most concern weapons experts, who point out that obtaining materials for chemical and biological weapons on the open market is much easier than obtaining the bomb-grade uranium or plutonium necessary for nuclear weapons.

But the rules for non-nuclear weapons are murkier. The memo builds on previous government mandates aimed at keeping human decision-makers “in the loop” on targeting decisions, or at least overseeing artificial intelligence tools that can be used to select targets. But those mandates can slow response times, which could prove especially difficult if Russia and China begin to make greater use of fully autonomous weapons that operate at breakneck speeds because humans are removed from battlefield decisions.

The new barriers would also prohibit allowing artificial intelligence tools to make a decision on granting asylum. And they would prohibit tracking someone by ethnicity or religion, or classifying someone as a “known terrorist” without a human being intervening.

Perhaps the most intriguing part of the order is that it treats private sector advances in AI as national assets that must be protected from espionage or theft by foreign adversaries, much as the first nuclear weapons were. The order requires intelligence agencies to begin protecting work on large language models or the chips used to fuel their development as national treasures, and to provide private sector developers with up-to-date intelligence to safeguard their inventions.

It empowers a new and still obscure organization, the AI Safety Institute, housed within the National Institute of Standards and Technology, to help inspect AI tools before they are released to ensure they cannot help a terrorist group build biological weapons or help a hostile nation like North Korea improve the accuracy of its missiles.

And it details efforts to bring the best AI specialists from around the world to the United States, just as the country sought to attract nuclear and military scientists after World War II, rather than risk having them work for a rival like Russia.
