AI governance: Use of AI in weapons systems is ‘morally repugnant’, says UN Secretary-General’s tech envoy

The advisory body of the Secretary-General of the United Nations (UN) on artificial intelligence (AI) considers that the use of AI in weapons systems that can make life-or-death decisions is not only “morally repugnant” but also “contrary to existing obligations, international humanitarian law and the law of armed conflict,” said Amandeep Singh Gill, the UN Secretary-General’s technology envoy.

At a press conference on Thursday evening at UN headquarters in New York, he said the advisory body had discussed the use of AI in the military domain. “This includes the clear call by the secretary-general to ban weapons systems that can make life-or-death decisions. In our view at the UN, that would not only be morally repugnant, but would also run counter to existing obligations, international humanitarian law and the law of armed conflict,” he said.

Gill is a member of the advisory body set up by the UN secretary-general, as is Sharad Sharma, founder of Indian technology think tank iSPIRT.

On Thursday, the advisory body published a report titled ‘Governing AI for Humanity’, which presents a comprehensive framework and seven recommendations to guide the governance of AI, while safeguarding human rights and ensuring that the benefits of AI are equitably distributed. The seven recommendations include the creation of an international scientific panel on AI, a policy dialogue on AI governance, an AI standards exchange, a capacity development network, a global AI fund, a global AI data framework and an AI office within the Secretariat.

The UN Secretary-General established the advisory body on AI in October 2023, comprising 39 global leaders from the public sector, private sector and civil society, including scientists, technologists and public policy experts, who participated in their personal capacity rather than as representatives of their respective organizations.


The report notes that AI applications for law enforcement and border controls are growing and raise concerns about due process, surveillance and lack of accountability for States’ commitments to human rights standards, enshrined in the Universal Declaration of Human Rights and other instruments.



Challenges posed by the use of AI in the military include new arms races, lowering the threshold of conflict, blurring the lines between war and peace, the proliferation of non-state actors, and the derogation of long-established principles of international humanitarian law, such as military necessity, distinction, proportionality and limiting unnecessary suffering, the report said. “For legal and moral reasons, decisions to kill should not be automated by AI,” it stated, adding that states should commit to refraining from deploying and using military applications of AI in armed conflicts in ways that do not fully comply with international law, including international humanitarian law and human rights law.


