
Algorithmic predictions and pre-emptive violence: a review of Professor Anthony Downey’s paper on AI and drones

Downey explores the future of drone warfare – and the prospect is unnerving

“Algorithmic Predictions and Pre-emptive Violence: Artificial Intelligence and the Future of Unmanned Aerial Systems”, by Anthony Downey, Professor of Visual Culture in the Middle East and North Africa at Birmingham City University’s College of Art and Design, and published in ‘Digital War’, provides a critical examination of the integration of Artificial Intelligence (AI) into unmanned aerial systems (UAS) and its implications for pre-emptive military action. Downey’s analysis is both timely and necessary, addressing the complex interplay between technology and ethics in contemporary warfare.

Downey’s exploration begins with the foundational principle that the predictive nature of AI is inherently aligned with military strategies of pre-emption. He notes, “the deployment of predictive, algorithmically driven systems in unmanned aerial systems (UAS) would therefore appear to be all but inevitable.” This inevitability raises significant ethical concerns, particularly when the precision and reliability of AI predictions are juxtaposed with the irreversible consequences of military strikes.

A critical point of Downey’s argument lies in his recognition of the biases inherent in AI systems. He explains, “the systematic training of neural networks (through habitually biased methods of data-labelling)… can and do produce so-called ‘hallucinations’.” These ‘hallucinations’ are instances in which AI systems misinterpret data or generate erroneous predictions, a risk that becomes far more alarming in the context of military decision-making.

Downey also delves into the legal and human rights implications of using AI in military contexts. He argues that there is an urgent need for new human rights laws to address the physical and psychological threats posed by autonomous weapons systems. His paper emphasises the complexity of accountability in AI-driven military operations, asking who, or what, is responsible for decisions made on the basis of algorithmic predictions.

Furthermore, Downey’s paper examines the role of corporations in developing AI technologies for military use. He scrutinises the involvement of companies such as Palantir in shaping the future of warfare, highlighting the blurring of the line between private technological development and public military strategy.

In one of the paper’s more striking sentences, Downey states, “the concern we therefore need to address… is whether the codification and substantiation of future threats in algorithmic models of target validation technically and ideologically collude with the military objective of pre-emption.” This sentence encapsulates the crux of the ethical dilemma: the potential for AI not only to predict threats but to construct the very narrative of threat that justifies pre-emptive action.

In summary, “Algorithmic Predictions and Pre-emptive Violence: Artificial Intelligence and the Future of Unmanned Aerial Systems” offers an insightful critique of the current trajectory of military AI development, raising ethical questions that demand attention from policymakers, technologists, and the public.

The paper serves as a significant contribution to the discourse on AI and warfare, emphasising the need for careful consideration of the long-term implications of these rapidly advancing technologies.