I first read about this in a Guardian article about an open letter from AI and tech industry experts to the United Nations, ahead of discussions on autonomous military robots. It follows a similar letter from 2015, detailed here in Scientific American. Not being really familiar with the issue, and still harbouring a sense of unreality, as if this must be an ad campaign for a new Terminator movie, I'm going to quote from the group's official website at Stopkillerrobots.org so I don't miss anything. It's fairly clear and straightforward, but I've put it in italics to keep the mods happy.
Over the past decade, the expanded use of unmanned armed vehicles has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are resulting in efforts to develop fully autonomous weapons. These robotic weapons would be able to choose and fire on targets on their own, without any human intervention. This capability would pose a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law.
Several nations with high-tech militaries, particularly the United States, China, Israel, South Korea, Russia, and the United Kingdom, are moving toward systems that would give greater combat autonomy to machines. If one or more chooses to deploy fully autonomous weapons, a large step beyond remote-controlled armed drones, others may feel compelled to abandon policies of restraint, leading to a robotic arms race. Agreement is needed now to establish controls on these weapons before investments, technological momentum, and new military doctrine make it difficult to change course.
Allowing life or death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war.
Replacing human troops with machines could make the decision to go to war easier, which would shift the burden of armed conflict further onto civilians. The use of fully autonomous weapons would create an accountability gap, as there is no clarity on who would be legally responsible for a robot’s actions: the commander, programmer, manufacturer, or robot itself? Without accountability, these parties would have less incentive to ensure robots did not endanger civilians, and victims would be left without the satisfaction that someone was punished for the harm they experienced.
Giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology. Human control of any combat robot is essential to ensuring both humanitarian protection and effective legal control. The campaign seeks to prohibit taking a human out-of-the-loop with respect to targeting and attack decisions.
A comprehensive, pre-emptive prohibition on the development, production and use of fully autonomous weapons–weapons that operate on their own without human intervention–is urgently needed. This could be achieved through an international treaty, as well as through national laws and other measures.
The Campaign to Stop Killer Robots urges all countries to consider and publicly elaborate their policy on fully autonomous weapons, particularly with respect to the ethical, legal, policy, technical, and other concerns that have been raised.
We support any action to urgently address fully autonomous weapons in any forum, including the Convention on Conventional Weapons (CCW), which held three meetings in 2014-2016 to discuss questions relating to lethal autonomous weapons systems.
The Campaign to Stop Killer Robots calls on all countries to implement the recommendations of the 2013 report on lethal autonomous robots by the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Professor Christof Heyns, which call on all states to:
- Place a national moratorium on lethal autonomous robots. (Paragraph 118)
- Declare – unilaterally and through multilateral fora – a commitment to abide by International Humanitarian Law and international human rights law in all activities surrounding robot weapons and put in place and implement rigorous processes to ensure compliance at all stages of development. (Paragraph 119)
- Commit to being as transparent as possible about internal weapons review processes, including metrics used to test robot systems. States should at a minimum provide the international community with transparency regarding the processes they follow (if not the substantive outcomes) and commit to making the reviews as robust as possible. (Paragraph 120)
- Participate in international debate and trans-governmental dialogue on the issue of lethal autonomous robots and be prepared to exchange best practices with other States, and collaborate with the High Level Panel on lethal autonomous robotics. (Paragraph 121)
Do you think that this ban on killer robots is a good idea? Or would you prefer Skynet be in charge of national security? I mean, we can trust machines... can't we?