This is a review of a talk by Anthony Downey, Professor of Visual Culture in the Middle East and North Africa at Birmingham City University. In it he argues that Artificial Intelligence (AI), often seen as an objective technology, in fact perpetuates Western dominance and neo-colonial violence. This is evident in the use of AI in Unmanned Aerial Vehicles (UAVs) and Lethal Autonomous Weapons Systems (LAWS), where algorithms play a role in determining life and death in conflict zones. The development of autonomous imaging technologies has, he argues, also contributed to this issue. To address these concerns, Downey claims it is important to examine how colonial technologies, such as mapping and aerial photography, have shaped machine vision, and how the delegation of decision-making to algorithms not only underlines the disposability of its subjects but also reveals a continuity between colonial technologies of representation and today’s unaccountable systems.
Professor Downey spoke lucidly on the theme of ‘Decolonising Machine Vision: Algorithmic Anxieties and Epistemic Violence’ in March 2023 in Ljubljana, touching on the perpetuation of Western power, and the dangers inherent in its projection, through the use of AI in unmanned aerial vehicles and lethal autonomous weapon systems. His presentation highlighted the ways in which AI not only perpetuates colonial technologies of vision but also creates new regimes of visibility and invisibility.
Downey’s discussion focused on the use of AI in targeted killings, particularly in the context of drone strikes in the Middle East. He referenced several examples, including the assassination of a top Iranian nuclear scientist and the killing of an aid worker and his family in Afghanistan. He argued that the data extraction underpinning such strikes has a historical lineage that can be traced back to colonial technologies of vision, from Napoleon’s occupation of Egypt to the development of cartographic mapping and photogrammetry. These technologies, along with the evolution of aerial surveillance, have been used to manage and predict future events on the ground.
However, Downey posits, the predictive function of AI also summons or hallucinates threats into being, colluding with the logic of military pre-emption to create a phantasm of “Oracle space” in which a threat is always present on the horizon. The epistemic, knowledge and taxonomic structures of AI are thus used to effect real violence in the real world. The assassination of Iranian General Qasem Soleimani in a strike by an MQ-9 Reaper drone, for instance, is just one example of how AI-driven data analysis, machine learning and deep learning systems, biometric data, possibly social media review, and human intelligence were combined to predict the future movements or threat level of an individual.
Downey argued that this algorithmically rationalized apparatus derives from and perpetuates the colonial imperative of extractivism – now taking its neocolonial form through data extraction as a means of maintaining disciplinary control. Aimé Césaire’s ‘colonization as thingification’ is thus reproduced in our time as ‘colonization as datafication’.
Furthermore, Downey notes that the development of algorithms, and the devolution of ocular-centric sight to them, rooted in colonial and Orientalist technologies of vision, have produced operational images made by machines, for machines: images never intended for the human eye.
He argues that in devolving our sight, and with it responsibility for the impact and legacy of algorithmic apertures, we are also devolving responsibility for data-driven apparatuses of killing. Downey also highlights the first known use of a remote-controlled unmanned ground vehicle to effect a targeted killing, and the first evidence of a fully autonomous lethal weapon being used in a theatre of war. The latter was buried in a UN report indicating that a lethal autonomous weapons system (LAWS), the STM Kargu-2 attack drone, may have been used in the March 2020 conflict in Libya, potentially marking the first known case of AI-based autonomous weapons causing casualties.
The Kargu-2, manufactured by the Turkish company STM, is capable of operating both autonomously and manually, utilising machine learning and real-time image processing to engage targets. During the conflict, logistics convoys and retreating forces affiliated with General Khalifa Haftar were pursued and remotely engaged by the unmanned combat aerial vehicles or the LAWS. The UN report stated that the LAWS were programmed to attack targets without requiring data connectivity between the operator and the munition, effectively implementing a “fire, forget, and find” capability. Although the report does not explicitly confirm any casualties caused by the LAWS, Downey sees this development as raising profound ethical concerns and questions about the use of AI in warfare.
Downey also discusses the implications of AI for the future of warfare and the need for critical engagement and regulation to prevent the perpetuation of Western power and neo-colonial violence. For instance, he discusses how the use of drones in warfare, particularly the MQ-9 Reaper, and the integration of AI into these systems present a deep and often under-examined problem. Drones operate as black box systems, with military and industrial contractors bound by proprietary contractual obligations to keep their operations secret.
Such secrecy has already been noted by AOAV, which found a systemic reluctance on the part of the UK’s Royal Air Force (RAF) to examine civilian casualties from its drone strikes in any critical manner. The UK Ministry of Defence not only refuses to say how many cities it has bombed in Iraq and Syria, it also refuses to record how many civilians it has killed in its military operations. In reference to the contractual obligations surrounding the black box system, AOAV has also found that the UK military refuses to say how many RAF bombs fail to explode because, incredibly, to do so would harm the weapon manufacturers’ ‘trade secrets’. War is not only kept from scrutiny by chaos; it is also silenced by corporate interest.
Indeed, as Downey points out, it was only through the controversial Project Maven, which involved Google working with the US Department of Defense to use AI for image processing and identification, that more information about how these systems operate came to light, and even then only dimly.
Palantir, one company that has since taken over Project Maven from Google, further illustrates the problematic theory of war ushered in by AI that drives the Pentagon’s future strategy. Palantir’s heralding of AI as a system for making predictions, and its commitment to “dynamically deter” threats, summon forth the martial logic of pre-emptive strikes and run up against the actuality of AI’s often flawed predictive capabilities.
Downey’s main argument is that the use of AI in the military, particularly in predicting the threat level of objects, is deeply imperfect, based on statistical and probabilistic models rather than a hundred percent certainty.
This can lead to terrible errors, which themselves go unpunished, such as the Zemari Ahmadi incident, in which a drone strike mistakenly identified Afghan civilians as a threat and killed ten people, including seven children. Through such tragedy, Downey suggests that the use of AI in warfare has led to the delegation of decision-making to these black box systems, reducing human agency and making us mere calibrators of systems over which we have no control.
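The statistical core of that objection can be made concrete with a short, purely illustrative sketch. The population size, base rate and accuracy figures below are hypothetical and are not drawn from Downey’s talk, but they show how even a nominally ‘99% accurate’ threat classifier produces far more false alarms than genuine detections when real threats are rare.

```python
# Illustrative only: hypothetical numbers showing why probabilistic
# "threat" classification cannot deliver certainty at scale.

def expected_outcomes(population, base_rate, sensitivity, false_positive_rate):
    """Return (true detections, false alarms) for a screening-style classifier."""
    actual_threats = population * base_rate
    non_threats = population - actual_threats
    true_detections = actual_threats * sensitivity
    false_alarms = non_threats * false_positive_rate
    return true_detections, false_alarms

# Hypothetical scenario: 100,000 people observed, 0.1% are genuine threats,
# and the model is "99% accurate" in both directions.
hits, false_alarms = expected_outcomes(
    population=100_000, base_rate=0.001, sensitivity=0.99, false_positive_rate=0.01
)
print(f"True detections: {hits:.0f}")       # ~99
print(f"False alarms:    {false_alarms:.0f}")  # ~999 innocent people flagged

# When the base rate is low, roughly ten innocent people are flagged
# for every genuine threat, despite the headline accuracy figure.
```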

Downey suggests that these systems suffer from a fundamental brittleness that can lead to unpredictability and even fatal errors. In the context of modern warfare, the development of AI has created a new era of hyper-surveillance and control, with virtual occupation becoming a possibility.
The potential for algorithmic hallucination, and the psychopathology associated with such systems, is also a concern: neural networks can decide with 99.9% certainty that an object is one thing when it is obviously something else, and with near-100% certainty that a ‘fooling image’ of random pixels is an identifiable object. These flaws of identification make AI prone to hallucinating threats into being, producing what Downey refers to as ‘phantasmagorical spaces’ in which a threat is always on the horizon. Such psychopathology may feed incredibly tautological and fatal logics, as exemplified by George W. Bush’s 2002 military policy, when the President of the United States stated that the US military had to be “ready for preemptive action when necessary to defend our liberty.”
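For readers who want to see how such misplaced confidence can arise, the sketch below reproduces the basic fooling-image technique in outline. It is illustrative only, not part of Downey’s talk, and assumes a standard PyTorch/torchvision installation with downloadable pretrained ImageNet weights; the target class index is chosen arbitrarily.

```python
# A minimal sketch of the "fooling image" phenomenon: random pixels are
# optimised until a pretrained classifier is near-certain they depict a
# real object, even though a human sees only noise.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
target_class = 580  # an arbitrary ImageNet class index

# Start from pure noise and optimise the pixels, not the model.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimiser = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimiser.zero_grad()
    logits = model(image)
    # Push the network towards high confidence in the target class.
    loss = -logits[0, target_class]
    loss.backward()
    optimiser.step()
    image.data.clamp_(0, 1)  # keep pixel values in a valid range

confidence = F.softmax(model(image), dim=1)[0, target_class].item()
print(f"Classifier confidence that noise is class {target_class}: {confidence:.1%}")
# The image remains unrecognisable noise to a human observer, yet the
# network can report very high confidence that it depicts a real object.
```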
Downey also highlights the impact that hyper-surveillance and the fear of imminent death by drone have on populations. The regime of impending death produced by autonomous weapons is significant not only for its fatalities but also for the evolved forms of trauma it induces among communities, effectively occupying and controlling the future trajectories of those populations. This potential for autonomous weapons to induce hypervigilance and fear is a major concern, and legislation must be put in place to address their psychological impact. The Airspace Tribunal, a five-year project aimed at documenting evidence of threats from above and developing a new human right to be free from such threats, is a step in the right direction; the evidence gathered through the tribunal will be presented to the United Nations later this year.
The use of AI in military operations is not a new concept, and the development of autonomous weapons has been a topic of discussion for years. Downey notes that only a few countries, including the US, UK, China, Turkey, and Israel, have the technology to produce autonomous weapons. However, he suggests that a major civilian harm incident caused by autonomous weapons would need to occur before legislation is put in place to ban them.
In conclusion, Downey’s insightful speech on decolonising machine vision and algorithmic anxieties sheds light on the dangers of using AI in military operations and the perpetuation of neo-colonial violence which may accompany its use. The potential for fatal errors and the delegation of decision-making to black box systems are major concerns. The psychological impact of autonomous weapons on populations must be addressed, and legislation must be put in place to ban them.
As AI continues to shape the future of warfare, it is crucial to ensure that these technologies are developed and deployed responsibly, with a focus on upholding human rights and promoting peace and security for all.
Watch the talk here: