
Is Israel using artificial intelligence to target Gaza?

This week, a widely shared and much-commented-upon article from +972 Magazine, written by Yuval Abraham, detailed allegations of a highly controversial and deeply troubling development in modern warfare.

According to the report, the Israeli Defence Forces (IDF) have allegedly utilised an artificial intelligence system named “Lavender” to direct airstrikes in Gaza, in what would mark an unprecedented use of technology for military targeting. The IDF is alleged to have identified tens of thousands of Gazans as potential targets, with the AI system reportedly subject to minimal human oversight and operated under a seemingly permissive stance on civilian casualties. If true, this method of targeting raises significant ethical, legal, and humanitarian concerns, particularly regarding the distinction between combatants and non-combatants and the principle of proportionality under international law.

The system, as described, processes vast amounts of data to generate lists of suspected militants for assassination. These lists are then acted upon with little human verification, transforming the nature of decision-making in warfare. The alleged reliance on AI for such critical, life-and-death decisions points to a disturbing trend towards the depersonalisation of conflict, carrying a potentially higher risk of errors and civilian casualties. The AI’s reported error rate of about 10 per cent, high in the context of targeted killings, indicates a troubling acceptance of collateral damage. This is further compounded by the reported targeting of individuals in their homes, often at night, and the use of unguided munitions against supposed low-level militants.

This approach to warfare, if accurately described, marks a significant and potentially dangerous evolution in military tactics. It calls into question the balance between the pursuit of military objectives and the safeguarding of civilian lives, a cornerstone of the law of armed conflict. The implications of such a strategy extend beyond the immediate conflict, potentially setting a precedent for future military engagements worldwide and challenging the international community’s capacity to regulate and oversee the use of advanced technologies in warfare.

The Israeli army has formally denied the specifics of the report, in particular the existence of a “kill list”, characterising Lavender instead as merely a database for cross-referencing intelligence sources. This denial, juxtaposed with the detailed allegations, highlights the opaque nature of military operations and the difficulty of assessing the ethical and legal implications of using AI in conflict. It underscores the need for greater transparency and accountability in military operations, especially those involving advanced technological systems.

Moreover, the report touches upon the broader implications of such military practices for the civilian population of Gaza, emphasising the profound humanitarian toll of the conflict. The alleged targeting policies, including the purported authorisation of significant civilian casualties in strikes against high-ranking militants, raise serious concerns about the conduct of hostilities and the protection of civilians in armed conflict. These concerns are compounded by the mass displacement and devastation reported across Gaza, pointing to the urgent need for a reassessment of the rules of engagement and the ethical frameworks guiding military operations.

In sum, the revelations from the +972 Magazine report, if substantiated, necessitate a critical examination of the role of AI in warfare, the ethical boundaries of military engagements, and the mechanisms in place to protect civilian lives in conflict zones. The international community, alongside national authorities, must grapple with these challenges, ensuring that advancements in technology do not outpace our collective moral and legal responsibilities.

As Dr. Iain Overton, Executive Director of Action on Armed Violence, says: “In an era where the rules of warfare could be profoundly disrupted by the use of artificial intelligence, the ethical considerations of how that might erode the basic protection of civilians cannot be overstated. The deployment of future systems like ‘Lavender’ – without rigorous oversight and accountability – threatens not only the principles of international humanitarian law but the very fabric of our shared humanity.”