
AI and defence: the ethical, legal, and strategic approach of the MOD to autonomous weapon systems critiqued in House of Lords

Artificial Intelligence (AI) has significantly expanded into various domains, including defence. This transformation has led to AI’s applications in military operations, ranging from optimising logistics to processing large volumes of intelligence data. The introduction of AI into warfare, particularly through AI-enabled Autonomous Weapon Systems (AWS), stands as one of the most revolutionary and contentious technological advancements in defence. AWS has triggered debates on compliance with existing rules and regulations of armed conflict, which are primarily geared towards humanitarian purposes.

These, at least, are the initial observations of ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’, a comprehensive inquiry report produced by the House of Lords. The inquiry was initiated on the Liaison Committee’s recommendation and supported by the House’s Select Committee on AI. The Committee’s investigation drew on numerous written submissions and evidence sessions, reflecting the depth and complexity of AI’s role in weapon systems.

The UK Government’s stance on this issue is to be “ambitious, safe, responsible.” However, there is a gap between aspiration and reality, necessitating proposals to ensure that the Government’s approach to the development and use of AI in AWS is both ethical and legal. This approach is crucial for achieving strategic and battlefield advantages while maintaining public understanding and democratic endorsement.

The report expresses disappointment in the Ministry of Defence’s lack of engagement in monitoring public attitudes towards AWS. Ensuring public consultation and placing ethics at the centre of policy, including expanding the role of the Ministry’s AI Ethics Advisory Committee, is emphasised as crucial for retaining public confidence.

On an international scale, the Government is urged to lead in engagement on AWS regulation. Although the AI Safety Summit was a positive initiative, it did not cover defence applications of AI. The Government is encouraged to include AI in AWS in its efforts to ensure human-centric, trustworthy, and responsible AI.

The ongoing international debate over AWS regulation could result in various outcomes, such as legally binding treaties or non-binding measures clarifying the application of international humanitarian law (IHL). These discussions are crucial for accelerating efforts towards an effective international instrument for AWS regulation.

A particularly sensitive issue is the use of AI in nuclear command, control, and communications. While AI advancements could enhance the effectiveness of these systems, they also pose risks of spurring arms races or increasing the likelihood of accidental or intentional nuclear escalation. The need for a balanced and cautious approach in this area is highlighted, with a call for the Government to adopt an operational definition of AWS.

The report stresses the importance of maintaining human control at all stages of an AWS’s lifecycle, addressing concerns about systems where autonomy is enabled by AI. Human control is vital for ensuring moral agency and compliance with legal standards, particularly in line with international humanitarian law.

The Ministry of Defence’s procurement processes, especially in software and data management—key areas for AI development—are critiqued for their lack of accountability and bureaucratic nature. The report suggests that revolutionary change in these processes is necessary for timely progress.

While some aspects of AI application in defence, such as general data analysis, cyber defence, intelligence, surveillance, reconnaissance, and logistics, are less controversial, the use of AI to enable certain AWS that can operate with minimal human intervention is a point of significant contention. The report delves into this dichotomy, examining the roles and potential impacts of AI in different defence applications.

The report also makes distinctions between different types of AI, defining the Ministry of Defence’s view of AI as technologies enabling machines to perform tasks that normally require human intelligence. This includes narrow AI, which focuses on specific tasks, and general AI, which simulates broader human cognitive abilities. The current focus is on narrow AI due to its prevalent application in the defence sector.

Key recommendations from the Defence AI Strategy and the accompanying policy statement “Ambitious, safe, responsible” are discussed. These include the need for ethical frameworks and advisory panels to guide AI’s application in defence, along with a focus on minimising biases in datasets and maintaining responsibility and accountability – something that AOAV has criticised in the past.

The report also addresses the Government’s efforts in AI regulation through various policy papers and strategies, highlighting the emphasis on a context-specific approach for sectors like defence. This approach aims to enable continued development of regulatory mechanisms based on sector-specific needs and challenges.

IHL’s role in setting limits on the development and use of AWS is underscored. Despite no specific international legal prohibitions on AWS, IHL provides a framework for regulating new weapons development and use, including AWS. This aspect is critical for ensuring that the development and use of AWS are aligned with international legal standards.

Efforts to regulate AWS through international bodies like the United Nations are detailed. These include discussions on the ethical and lawful use of AWS, emphasising that they must not target civilians or result in excessive collateral damage. The UK, alongside other nations, has reaffirmed its commitment to ensuring that AWS comply with international humanitarian law and ethical principles.

The AI Safety Summit, though not covering defence AI, is highlighted for its focus on managing risks associated with AI advances. The outcome, the Bletchley Declaration, emphasises the need for safe, human-centric, and responsible AI design and use. The report commends the Declaration and encourages the Government to apply its principles to AI in defence.

Dr. Iain Overton, Executive Director of AOAV, commented on the report, stating, “This comprehensive analysis illuminates the complex interplay between ethical imperatives and technological advancements in military AI. It underscores the urgent need for robust legal frameworks and public engagement to ensure AI’s integration into defence respects humanitarian principles and strategic stability. Let’s hope the MOD listens to the report’s findings. The cynic in me doubts they will.”

In summary, the report covers the implications of AI-derived autonomy in AWS, its potential impacts on the battlefield, and the application of IHL to AWS. It also examines the UK Government’s domestic approach to AWS development and use, emphasising the crucial roles of human elements like empathy, judgment, morality, responsibility, and conscience, which set us apart from AI systems.