In September 2013, PRIO and the Norwegian Centre for Humanitarian Studies hosted the breakfast seminar “Killer Robots: the Future of War?”. The goal of the seminar was to contribute to the public debate on autonomous weapons, and identify key ethical and legal concerns relating to robotic weapon platforms.
The event was chaired by Kristin B. Sandvik (PRIO), and the panellists were Alexander Harang (Director, Fredslaget), Kjetil Mujezinovic Larsen (Professor of Law, Norwegian Centre for Human Rights, UiO) and Tobias Mahler (Postdoctoral Fellow, Norwegian Research Center for Computers and Law, UiO).
Based on the panel discussion, the following highlights the prospects for banning autonomous weapons, as well as the legal and ethical challenges posed by current technological developments.
Killer robots and the case against them
As a result of technological advancement, autonomous weapon platforms, or so-called lethal autonomous robots (LAR), may well be on the horizon of future wars. Such development, however, raises legal and ethical concerns that require discussion and assessment. Chairing the seminar, Kristin Bergtora Sandvik highlights that such perspectives are absent from current political debates in Norway, and points out that “autonomous weapons might not be at your doorstep tomorrow or next week, but they might be around next month, and we think that it is important that we begin thinking about this, begin understanding what this is actually about, and what the complications are for the future of war.”
Killer robots are defined as weapon systems that identify and attack targets without any direct human control. As outlined in the Human Rights Watch Losing Humanity report, unmanned robotic weapons can be divided into three categories. First, human-controlled systems, or human-in-the-loop systems, are weapon systems that can perform tasks delegated to them independently, but that deliver force only on a human command. This category constitutes the currently available LAR technology. Second, human-supervised systems, or human-on-the-loop systems, are weapon systems that can conduct targeting processes independently, but remain under the real-time supervision of a human operator who can override their automatic decisions. Third, fully autonomous systems, or human-out-of-the-loop systems, are weapon systems that can search for, identify, select and attack targets without any human control.
Alexander Harang raises four points regarding such weapon systems. First, killer robots may lower the threshold for armed conflict; as Harang emphasizes, “it is easier to kill with a joystick than a knife”. Second, the development, deployment and use of armed autonomous unmanned systems should be prohibited, as machines should not be allowed to make the decision to kill people. Third, the range and deployment of weapons carried by unmanned systems is threatening to other states and should therefore be limited. Fourth, the arming of unmanned weapon platforms with nuclear weapons should be banned.
In response to these challenges, the Campaign to Stop Killer Robots urgently calls upon the international community to establish an arms control regime to reduce the threat posed by robotic systems. More specifically, the Campaign calls for an international agreement prohibiting fully autonomous weapon platforms. The Campaign is an international coalition of 43 NGOs based in twenty countries, supported by eight international organisations, a range of scientists, Nobel laureates, and regional and national NGOs. It has already served as a forum for high-level discussion: so far, 24 states at the UN Human Rights Council have participated in talks, and the Campaign has also raised these demands at the 2013 meeting of the Convention on Certain Conventional Weapons (CCW), where more than 20 state representatives participated. Harang emphasizes that “the window of opportunity is open now, and [the issue] should be addressed before the military industrial complex proceeds with further development of these weapon systems.”
Finally, Harang notes the difficulty of establishing clear patterns of accountability in war. Who is responsible when a robot kills on the battlefield? Who is accountable in the event of a malfunction in which an innocent civilian is killed? In legal terms, it is unclear where responsibility and accountability lie: somewhere in the military chain of command, or with the software developer. One thing is certain: the robot itself cannot be held accountable or prosecuted if IHL is violated.
The legal conundrum
Although unmanned robotic technology is developing rapidly, the laws governing these matters are evolving slowly. In the legal context it is important to assess how autonomous weapon systems conform to existing law, be it international humanitarian law, human rights law or general international law. Harang emphasizes that this technology also challenges arms control regimes and the existing disarmament machinery. In particular, it raises concerns under humanitarian law, which requires distinction between civilians and combatants in war. Addressing such legal concerns, Kjetil Mujezinovic Larsen reflects on how fully autonomous weapons can be assessed in light of existing international humanitarian law, setting out some legal premises for the discussion of whether such weapons are already illegal and whether they should be banned.
Under IHL, autonomous weapon platforms can be either inherently unlawful or potentially unlawful. Such weapons can then be evaluated against two particular principles of IHL, namely proportionality and distinction. Inherently unlawful weapons are always prohibited; other weapons are lawful but might be used in an unlawful manner. Where do autonomous weapons fit?
Larsen explains that inherently unlawful weapons are those that, by design, cause superfluous injury or unnecessary suffering, such as chemical and biological weapons. As codified under IHL, such weapons are unlawful with regard to the principle of proportionality, which protects combatants. This prohibition does not immediately apply to autonomous weapons, because it concerns the effect of a weapon on the targeted individual, not the manner of engagement, whereas the concern with autonomous weapons lies precisely in the way they are deployed. If autonomous weapons were used to deploy chemical, biological or nuclear weapons, however, they would clearly be unlawful.
Furthermore, as outlined in IHL, any armed attack must be directed at a military target, to ensure that the attack distinguishes between civilians and combatants. If a weapon is incapable of making that discrimination, it is inherently unlawful. To the extent that robots are unable to discriminate between civilians and combatants, using them would have uncontrollable effects, and such weapons would be incapable of complying with the principle of distinction, which is fundamental in international humanitarian law.
The Human Rights Watch Losing Humanity report states that “An initial evaluation of fully autonomous weapons shows that even with the proposed compliance mechanisms, such robots would appear to be incapable of abiding by the key principles of international humanitarian law. They would be unable to follow the rules of distinction, proportionality, and military necessity”. Christof Heyns, in his report to the Human Rights Council, is more cautious: “it is not clear at present how LARs could be capable of satisfying IHL and IHRL requirements [.]”
As Larsen highlights, the question of compliance is highly controversial in the legal sphere. From one legal viewpoint, the threshold for prohibiting weapons is rather high: hard-line IHL lawyers will say that a prohibition applies only if there are no circumstances whatsoever in which an autonomous weapon can be used lawfully. For example, there are defensive autonomous weapons programmed to destroy incoming missiles, and autonomous weapons are also used to target military objectives in remote areas where no civilians are involved. Under these circumstances, autonomous weapons do not face the problem of distinction and discrimination. However, the presumption of civilian status in IHL holds that, in case of doubt as to whether an individual is a combatant or a civilian, he or she should be treated as a civilian. Will technology be able to make such assessments and take precautions to avoid civilian casualties? How can an autonomous weapon be capable of doubt, and act on doubt?
In addition to such legal concerns, Larsen discusses a range of ethical and societal concerns. Some argue that autonomous weapons will make it easier to wage war, because there is less risk of death and injury to one’s own soldiers. Such technology could also make it easier for authoritarian leaders to suppress their own people, because the risk of a military coup is reduced. Furthermore, using autonomous weapons increases the distance between the soldier and the battlefield, making human emotions and ethical considerations irrelevant. The nature of warfare would change, as robots cannot show compassion or mercy.
On the other hand, some scholars argue that such weapons may be advantageous in terms of IHL. Soldiers under psychological pressure and steered by emotions can choose to disobey IHL; an autonomous weapon would have no reason or capacity to snap, and robots might achieve military goals with less violence. Soldiers may kill in order to avoid being killed, whereas robots, not facing that dilemma, could find it easier to capture rather than kill the enemy.
Potentially, autonomous weapons could make the use of violence more precise, leading to less damage and risk for civilians, though this would require substantial development of the software. Throughout history, weapons have been passive tools that humans actively manipulate to achieve a certain purpose. Larsen suggests that if active manipulation is taken out of the equation, autonomous weapons perhaps cannot be considered weapons in the IHL sense at all, and IHL as such may be insufficient to resolve the legal disputes about LAR. This would call for new laws and regulations to settle the issue of accountability. Alternatively, a ban could resolve the dispute over their degree of unlawfulness by designating them as inherently unlawful. Regardless, Larsen emphasizes the urgent need for a comprehensive and clear legal framework, particularly given the rapid technological development in this field. He also notes that lawyers must defer to technology experts on whether such technology can comply with current legal frameworks.
Given the pace of technological advancement, Tobias Mahler argues that it is realistic to expect automated and autonomous technology to be implemented in all spheres of society in the near future. In this context, how realistic is a ban on killer robots? Mahler considers the chances slim and foresees a technological domino effect: once some states acquire autonomous robots, other states can be expected to follow. From a technological and military perspective, the incentives for doing so are fairly strong.
In addition to the conventional features of LARs, such as surveillance equipment, robustness and versatility, robots can also be programmed to communicate with each other. This would mean programming different vehicles to share and exploit the information they collect, advancing the strategic approach to finding and attacking targets. Such machine-to-machine communication is already used in civilian technology such as autonomous vehicles, and is also assumed to be in use in the military complex. The development and advancement of military technology is not disclosed to the public, for strategic and security reasons. Thus, the technological opportunities of LARs for the military sector are immense.
Mahler emphasizes that although military hardware may look frightening, the real threat lies in the software algorithms that determine the decisions being made. It is the software that controls the hardware and makes decisions concerning human lives: robots rely on humans specifying, through software, what they should do. Given the limits of what programmers can specify, software development is prone to shortcomings and challenges. How do we deal with the artificial intelligence of autonomous robots?
Software malfunctions, as well as hacking, are problems in all spheres where technology is used, and in a future permeated by technology any device could potentially cause harm to civilians. In this context, Mahler suggests that there is still no full clarity as to what a killer robot is. Questioning the relative lethality of autonomous weapons, he suggests that “in 20 years, when everything will be autonomous, you might be killed by a door.” He points out, however, that this does not mean the concerns related to autonomous weapon systems should be ignored or avoided; the argument simply shows that such challenges are present in both the civilian and the military context. Nevertheless, it remains unclear who the responsible party would be when killer robots are used.
Other concerns raised by Mahler regard whether LAR technology differs from other types of weapon technology, and whether it may change the nature of war. In a war situation, would soldiers prefer to be attacked by another soldier, or by a killer robot? How will the dehumanization of war affect soldiers and the public? Is it correct to assume that soldiers would prefer to fight other soldiers? A soldier in a combat situation could make an ethical judgement and show mercy, unlike a robot; however, there is little evidence that mercy is commonly shown by soldiers. On the other hand, governments could gain great public support by promoting LARs as a means of limiting the loss of soldiers. As Mahler states, “people are really concerned about loss of lives of their soldiers, and if there is any way to protect them, then one might go that way.”
One question that remains unanswered is whether software developers are able to write software sufficiently advanced for autonomous war machines. One way of dealing with such concerns would be to develop robots that comply with IHL. Mahler ponders whether a pre-emptive ban may already be too late in light of current technological development; perhaps the aim should instead be to regulate robots and artificial intelligence so that they comply with current legislation.
In this regard, Mahler points to the need for further development of the current conceptual framework of war and the law of armed conflict, as the concepts currently used in IHL may prove insufficient for the future of war. For instance, in a situation where robots fight robots, who is considered a combatant under IHL: the software programmer, or the president who decided to send out the killer robot? Future technology might be able to distinguish between civilians and combatants using face recognition or iris scans. For now, however, the issue remains unresolved.
Regardless of technological inevitability, further discussion of this issue is necessary. Legal, ethical and societal challenges must be identified, and the means to address them must be specified, in order to curb unintended humanitarian consequences in the future. Perhaps these consequences can be avoided through a ban on LAR systems, or perhaps current concepts of IHL need to be broadened in order to tackle the legal shortcomings. Maybe software developers will one day be able to write programs that comply with IHL. In any case, it is important to discuss and address these issues with the knowledge and tools we presently have in place. The future of war is not yet determined.
United Nations General Assembly – Human Rights Council (2013) “Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns”. Available at: http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf
Human Rights Watch (2012) “Losing Humanity: The Case against Killer Robots”. Available at: http://www.hrw.org/node/111291/section/1
Campaign to Stop Killer Robots (2013) “Who we are”. Available at: http://www.stopkillerrobots.org/coalition