AI regulation: California bill to regulate AI raises alarm in Silicon Valley

A California bill that could impose restrictions on artificial intelligence has technology companies, investors and activists scrambling to explain what this first-of-its-kind legislation could mean for their industry in the state.

The bill is still pending in the state capital, Sacramento. It is expected to go to the California State Assembly’s appropriations committee on Thursday before being voted on by the full Assembly.

If signed into law by Gov. Gavin Newsom, the bill would require companies to test the safety of AI technologies before making them available to the public. It would also allow California’s attorney general to sue companies if their technologies cause serious harm, such as massive property damage or loss of life.

The debate over the artificial intelligence bill, dubbed SB 1047, is a reflection of the arguments that have driven the intense interest in artificial intelligence. Opponents believe it will stifle progress on technologies that promise to boost worker productivity, improve health care and combat climate change.

Supporters of the bill believe it will help prevent disasters and put limits on the work of companies that focus too much on profits. Last year, many AI experts and tech executives led public discussions about the risks of AI and even urged lawmakers in Washington to help set those limits.

Now, in a dramatic reversal, the tech industry is resisting an attempt to do exactly that in California. Because they are based in or do business in the state, many of the top AI companies, including Google, Meta, Anthropic and OpenAI, would be subject to the proposed law, which could set a precedent for other states and national governments.

SB 1047 comes at a precarious time for the San Francisco Bay Area, where much of the AI startup community is based, as well as many of the industry’s largest companies. The bill, its harshest critics argue, could push AI development to other states, just as the region is recovering from a pandemic-induced slump. Some prominent AI researchers have supported the bill, including Geoffrey Hinton, a former Google researcher, and Yoshua Bengio, a professor at the University of Montreal. The two have spent the past 18 months warning about the dangers of the technology. Other AI pioneers have spoken out against the bill, including Meta’s chief AI scientist Yann LeCun and former Google executives and Stanford professors Andrew Ng and Fei-Fei Li.

Newsom’s office declined to comment. Google, Meta and Anthropic also declined to comment. A spokesperson for OpenAI said the bill could stifle innovation by creating an uncertain legal landscape for AI creation. The company said it had raised concerns in meetings with the office of California state Sen. Scott Wiener, who created the bill, and that serious AI risks were national security issues that should be regulated by the federal government, not states.

The bill has its roots in the “AI salons” held in San Francisco. Last year, Wiener attended a series of such salons, where young researchers, entrepreneurs, activists and amateur philosophers discussed the future of artificial intelligence.

After attending those discussions, Wiener said he created SB 1047, with input from the Center for AI Safety, a lobbying group of experts linked to effective altruism, a movement that has long been concerned with preventing the existential threats posed by AI.

The bill would require safety testing for systems that cost more than $100 million to develop and are trained using a certain amount of raw computing power. It would also create a new state agency to define and oversee such testing. Dan Hendrycks, founder of the Center for AI Safety, said the bill would force the biggest tech companies to identify and root out harmful behavior from their most expensive technologies.

“Complex systems will have unexpected behavior. You can count on that,” Hendrycks said in an interview with The New York Times. “The bill is a call to ensure that these systems do not present dangers or, if they do, that the systems have adequate protections.”

Today’s AI technologies can help spread disinformation online, including through text, still images and videos. They are also starting to eliminate some jobs. But studies by OpenAI and others over the past year showed that current AI technologies were not significantly more dangerous than search engines.

Still, some AI experts say there are serious dangers ahead. For example, Dario Amodei, chief executive of Anthropic, a high-profile artificial intelligence startup, told Congress last year that new AI technology could soon help unskilled people create large-scale biological attacks.

Wiener said he was trying to avoid those scary scenarios.

“Historically, we’ve waited for bad things to happen and then we’ve regretted it and had to deal with it later, sometimes when the horse had already left the barn and it was too late,” Wiener said in an interview. “So my view is that we try to get ahead of the risks in a very gentle way and anticipate them.”

Google and Meta sent letters to Wiener expressing concerns about the bill. Amodei’s company, Anthropic, surprised many observers when it also opposed the bill in its current form and suggested changes that would allow companies to control their own safety testing. The company said the government should only intervene if real harm was caused.

Wiener said the tech giants’ pushback sends mixed messages. The companies have already promised the Biden administration and global regulators that they will test the safety of their systems.

“The CEOs of Meta, Google and OpenAI have all volunteered to conduct testing and that’s what this bill asks them to do,” he said.

Critics of the bill say they are concerned that the safety rules will add new responsibilities to AI development, as companies will have to make a legal promise that their models are safe before releasing them. They also argue that the threat of legal action by the state attorney general will discourage tech giants from sharing the software code underlying their technology with other companies and software developers, a practice known as open sourcing.

Open source is commonplace in the world of artificial intelligence. It allows small businesses and individuals to piggyback on the work of larger organizations, and critics of SB 1047 argue that the bill could severely limit the options of startups that don’t have the resources of tech giants like Google, Microsoft and Meta.

“It could stifle innovation,” said Lauren Wagner, an investor and researcher who has worked for both Google and Meta.

Open source proponents believe that sharing code allows engineers and researchers across the industry to quickly identify and fix problems and improve technologies.

Jeremy Howard, an entrepreneur and AI researcher who helped create the technologies that power mainstream AI systems, said the new California bill would ensure that the most powerful AI technologies belong only to the largest tech companies. And if these systems ever surpass the power of the human brain, as some AI researchers believe, the bill would consolidate power in the hands of a few corporations.

“These organizations would have more power than any country, any entity of any kind. They would be in control of an artificial superintelligence,” Howard said. “That’s a recipe for disaster.”

Others argue that if open source development is not allowed to flourish in the United States, it will spread to other countries, including China. The solution, they argue, is to regulate how people use AI rather than regulating the creation of the core technology.

“AI is like a kitchen knife, which can be used for good things, like cutting an onion, and for bad things, like stabbing a person,” said Sebastian Thrun, an AI researcher and serial entrepreneur who founded Google’s self-driving car project. “We shouldn’t be trying to put a switch on a kitchen knife. We should be trying to prevent people from misusing it.”
