Regulators are focusing on the real risks of AI, rather than the theoretical ones.

“I’m sorry, Dave, but I’m afraid I can’t do that.” HAL 9000, the killer computer from “2001: A Space Odyssey,” is one of many examples in science fiction of artificial intelligence (AI) outwitting its human creators with deadly consequences. Recent advances in AI, notably the launch of ChatGPT, have put the question of “existential risk” high on the international agenda. In March 2023, a number of tech luminaries, including Elon Musk, called for a pause of at least six months in AI development over safety concerns. At an AI safety summit in Britain last fall, politicians and scientists debated how best to regulate this potentially dangerous technology.

Fast forward to the present, however, and the mood has changed. Fears that the technology was advancing too quickly have been replaced by concerns that AI might be less useful, in its current form, than expected, and that tech companies might have overhyped it. At the same time, the rule-making process has led policymakers to recognize the need to address existing problems associated with AI, such as bias, discrimination and infringement of intellectual property rights. As the final chapter in our series of schools briefs on AI explains, the focus of regulation has shifted from vague, hypothetical risks to specific, immediate ones. That is a good thing.

For example, AI-based systems that assess people for loans or mortgages and allocate benefits have been found to show racial bias. AI-based recruiting systems that screen resumes appear to favor men. Facial recognition systems used by law enforcement are more likely to misidentify people of color. AI tools can be used to create “deepfake” videos, including pornographic ones, to harass people or misrepresent the views of politicians. Artists, musicians, and news organizations say their work has been used, without permission, to train AI models. And there is uncertainty about the legality of using personal data for training purposes without explicit consent.

The result has been a wave of new laws. For example, the European Union’s Artificial Intelligence Act will ban the use of live facial recognition systems by law enforcement, along with the use of AI for predictive policing, emotion recognition, and subliminal advertising. Many countries have introduced rules requiring AI-generated videos to be labeled. South Korea has banned deepfake videos of politicians in the 90 days before an election; Singapore could follow suit.

In some cases, existing rules will need to be clarified. Both Apple and Meta have said they will not launch some of their AI products in the EU because of ambiguous rules on the use of personal data. (In an online essay for The Economist, Mark Zuckerberg, Meta’s chief executive, and Daniel Ek, Spotify’s boss, argue that this uncertainty means European consumers are being denied access to the latest technology.) And some issues (such as whether using copyrighted material for training purposes is permissible under “fair use” rules) may be decided in court.

Some of these initiatives to address existing problems with AI will work better than others, but all reflect lawmakers' choice to focus on the real risks posed by existing AI systems. That is not to say safety risks should be ignored; specific safety regulations may eventually be needed. But the nature and extent of future existential risk is hard to quantify, which makes it hard to legislate against today. For proof, look no further than SB 1047, a controversial bill making its way through California's state legislature.

Proponents of the bill say it would reduce the chance of runaway AI causing catastrophe (defined as "mass casualties" or damage worth more than $500 million) through the use of chemical, biological, radiological or nuclear weapons, or cyberattacks on critical infrastructure. It would require creators of large AI models to comply with safety protocols and incorporate a "kill switch." Critics say its premise is more science fiction than fact, and that its vague wording would cripple businesses and stifle academic freedom. Andrew Ng, an AI researcher, has warned that it would "paralyze" researchers, because they would be unsure how to avoid breaking the law.

After furious pressure from opponents, parts of the bill were watered down earlier this month. Some of it makes sense, such as protections for whistleblowers at AI companies. But for the most part it rests on a quasi-religious belief that AI poses the risk of large-scale catastrophic harm, even though making nuclear or biological weapons requires access to tools and materials that are tightly controlled. If the bill reaches California Gov. Gavin Newsom's desk, he should veto it. As things stand, it is hard to see how a large AI model could cause death or physical destruction. But there are plenty of ways in which AI systems can, and already do, cause non-physical forms of harm, so lawmakers are, for now, right to focus on those.

© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under license. The original content can be found at www.economist.com
