
While the International Committee of the Red Cross (ICRC) has yet to issue a formal policy on killer robots, this piece on the legal, ethical and humanitarian implications of robotic warfare, published earlier this week, gives some clues as to its thinking and concerns.
In June the ICRC took part in a workshop hosted by the Oxford Institute for Ethics, Law and Armed Conflict on the legal, ethical and humanitarian implications of robotic warfare.
‘Autonomous weapon systems’ (including so-called ‘lethal autonomous robots’) are those that can learn or adapt their functioning in response to changing circumstances in the environment in which they are deployed. Although these systems have yet to be deployed, the rules of international humanitarian law (IHL) would apply to them. It is difficult to see how such weapons could be made to comply with the IHL rules of distinction, proportionality and precautions in attack, says Nathalie Weizmann, ICRC legal adviser. Furthermore, IHL requires a legal review of all new technologies of warfare before they are developed, procured or deployed, to ensure they can be used in a way that complies with IHL.
In his 2013 report to the UN Human Rights Council, the Special Rapporteur on extrajudicial, summary or arbitrary executions called for the establishment of a high-level panel on LARS to formulate a policy for the international community on the issue. In light of this report, the Oxford meeting canvassed perspectives and opinions from a variety of academic disciplines on concerns surrounding the development and eventual deployment of lethal robots. The discussions from the meeting are intended to serve as the basis for policy papers that will contribute to the debate on LARS.
An international campaign, launched in April 2013 under the banner Stop Killer Robots and backed by a group of human rights and other organizations, calls for a pre-emptive and comprehensive ban on the development, production and use of fully autonomous weapons.
Public concerns about LARS are rooted in fears that such robots could be unreliable, act arbitrarily, or even begin to act against their human designers’ wishes.
The implications of removing a human decision-maker able to weigh moral, legal and ethical concerns are wide-ranging and provoke a number of questions. In particular, it is difficult to determine who would be held responsible should a lethal autonomous robot violate international humanitarian law. Would LARS be able to distinguish armed combatants from civilians, or react with a proportionate degree of force to an attack in a rapidly changing battlefield environment?
Given that some level of human supervision would be required to keep these new technologies in operation, what level of human oversight would be acceptable? If such a robot were used to defend against incoming weapons, could an exception be made to the requirement for human supervision? Another major concern is whether such robots would be more likely to resort to force than traditional weapons. In what circumstances would the use of such robots be lawful outside of armed conflict, and would it be morally acceptable to allow a lethal autonomous robot to take a human life?