Killer Robots and the Rule of Law


Killer Robots: the law must keep up with the march of technology

A new type of arms race is underway, and its outcome will shape the future of our planet. This race is not one between two countries. It is between the “tortoise” of our slowly changing legal and institutional norms and the “hare” of rapid technological change in the arms industry.

The case of lethal autonomous weaponry–what many now call killer robots–offers a classic example of this larger challenge.

All civilisations have had to adapt to technological change. The horrible wars of the 19th and, of course, the 20th century underscored not just the lethality of modern weaponry, but also its tragic effects upon civilian populations.

This has inspired efforts to strengthen international humanitarian law (IHL), which establishes that parties to a conflict do not have an unlimited choice of methods and means of warfare. Other norms that evolved to protect victims of armed conflict include the 1977 Additional Protocol I to the Geneva Conventions, which requires States to determine whether the use of a new weapon system would be prohibited under any rule of international law.

Fueled by popular images of rampaging killer robots, concerned citizens in many countries are paying close attention to the issue of fully autonomous weapons that are capable of identifying and firing at targets entirely on their own. This is a step well beyond today’s drones, which–while unmanned–remain under human remote control.

Reports of the development of such weapons have started to generate a serious discussion of the ethical, moral, and legal ramifications of their use. A recent meeting of the UN Human Rights Council on lethal autonomous robotics clearly identified the stakes involved.

The discussion was held on the basis of the report of the UN’s Special Rapporteur on extrajudicial, summary or arbitrary executions, Mr. Christof Heyns. In his report, Mr. Heyns questioned how these weapons would be capable of satisfying the requirements of the rules of warfare, namely international human rights and humanitarian law.

He concluded that it is unlikely that autonomous weapons would possess the qualities necessary for compliance with IHL, including human judgment, free will, or an understanding of the intentions behind actions. The report also noted that lethal autonomous robotics may lower the threshold for States going to war and would likely lead to a proliferation of these systems as States transfer and sell them. There would also be the risk of such weapons being acquired and used by non-State actors such as criminal cartels.

As recognized in the discussion at the Human Rights Council, the UN’s disarmament forums are uniquely suited to address a number of the implications raised by these weapons. In particular, the General Assembly, through its First Committee, may consider principles and make recommendations on both disarmament and the regulation of armaments. The killer robot challenge requires some global standards and the UN offers an indispensable common forum for Governments to adopt measures to mitigate and eliminate risks of unacceptable harm.

Some argue that since autonomous weapons have not been deployed, it is premature to take action. Yet we need not wait for a weapon system to emerge fully before appropriate action can be taken to understand its implications and mitigate and eliminate unacceptable risks.

This approach led to the ban on blinding laser weapons. Indeed, the first resolution of the General Assembly in 1946 established a commission to draft proposals for the elimination of nuclear weapons, well before such weapons were fully integrated in the security policies of any country. It is also far easier to ban or control particular weapons before an entire military-industrial complex emerges to perpetuate their production and sale.

So the time is ripe for both Governments and civil society to conduct a thorough examination of the political, legal, technical and ethical implications of such weapons. In the meantime, States should exercise utmost restraint and maximum transparency in their activities related to these weapon systems, including development and testing.

To help guide this wider public debate, we should all keep in mind “The Three Laws of Robotics” put forward by Isaac Asimov back in 1942. The fundamental norm was that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Asimov’s precepts remain surprisingly relevant today. They offer the outlines of a code of conduct that countries could incorporate into artificial intelligence, and human intelligence alike, to ensure that military robotics fully complies with the laws of war.

Killer robots are rapidly moving from the realm of science fiction to technological fact. But let’s not forget that in Aesop’s fable, the tortoise won the race.