This is a review of Christian Enemark's book 'Moralities of Drone Violence' (Edinburgh University Press, 2023).
On January 3, 2020, Qasem Soleimani, an Iranian general, was killed in Iraq by two missiles fired from an MQ-9 Reaper drone. The strike, authorised by then-President Donald Trump, also killed nine other people, including four Iranian officers and five members of the Iraqi Popular Mobilisation Forces. It marked the first time one state had used an armed drone against a high-level official of another state on the territory of a third state. The rationale offered by the US government was mixed: some officials presented the strike as a defensive action to protect US personnel abroad, while others framed it as punishment for past wrongdoing.
The use of armed drones in this and other situations has generated moral uncertainty and ethical controversy, with different conceptual understandings brought to bear on the violence in question. To bring greater clarity to the debate, 'Moralities of Drone Violence' explores and orders the variety of ways in which the violent use of an armed drone can be judged just or unjust. The book organises moral ideas around a series of concepts of 'drone violence', including warfare, violent law enforcement, tele-intimate violence, and violence devolved from humans to AI technologies.
To understand the moral implications of drone violence, the book seeks to establish what armed drones are, how they are used, and who uses them. Armed drones are uninhabited aircraft equipped with video cameras and weapons, and they are mainly used for surveillance, reconnaissance, and targeted killing operations. The US has been the most prolific user of armed drones, although other countries, including the UK and Israel, have also used them.
It is in situations like the killing of Qasem Soleimani that these weapons raise the hardest moral questions, and these are the questions Enemark's framework of drone-violence concepts is designed to address.
The author finds that the proliferation of armed drones has become a significant concern for many governments and international organisations. As drones become more accessible and cost-effective, more countries are acquiring them, and drone use is likely to increase. With this proliferation, drone-using states are likely to keep improving the capabilities of these aircraft, including by incorporating AI technologies.
One of the potential advantages of incorporating AI into drone systems is the ability to devolve some functions from remote human operators to on-board AI technologies. The author argues that this would allow some functions, such as aerial manoeuvres or data analysis, to be performed at superhuman speed, reducing the need for personnel to be available for every drone mission. It could also reduce the importance of maintaining strong and secure communication links between faraway drones and their on-the-ground controllers.
While the author sees operational advantages to incorporating AI into drone systems, he insists that the ethical implications of doing so must also be considered. One critical ethical issue is whether decisions about drone violence should be made by humans or by AI technologies. The consequentialist argument in favour of artificial moral agency suggests that a future in which armed drones (and other weapon systems) are controlled entirely by AI could result in fewer unjust harms to humans. However, this argument neglects the valuable capacities of humans to make judgments based on lived experience, to disobey rules when morally required, and to bear moral responsibility for wrongdoing.
Thus, to preserve the benefits of these capacities, the author argues that the better approach to incorporating AI into weapon systems is to ensure that meaningful human control over violence is maintained. This requires careful consideration of how AI technologies are incorporated into drone systems and of the level of human involvement necessary for decisions about drone violence.
To help in this consideration, the author states that the “meaningful human control” (MHC) concept can be useful. This concept allows distinctions to be drawn between the technical characteristics and ethical acceptability of different kinds of systems operating under various conditions. Although certain minimal indicators of whether human control is ‘meaningful’ exist, he argues it is also essential to recognise that context matters when making moral judgments about how AI should assist state violence. Standards of meaningfulness can and should differ according to the type of target to be struck, the purpose that this would serve, and the kind of environment into which a weapon system is deployed.
In high-risk circumstances where the potential for unjust harm is significant, stricter modes of human-machine interaction (HMI) may be necessary when selecting and engaging targets. In other situations where the risk of harm is lower, milder HMI modes may be appropriate. And, in some cases, AI assistance in the wielding of drone violence may never be morally permissible due to the risk of unjust harm.
Overall, the book concludes that the incorporation of AI into drone systems requires a careful balance between operational advantages and ethical considerations. To ensure that the use of these technologies is ethically justifiable, meaningful human control over violence must be maintained while also taking into account the context and circumstances of the situation.