Abstract

Key global powers are engaged in the development of artificial intelligence (“AI”) for military purposes, and it is widely accepted that the development and deployment of AI tools will lead to a revolution in military strategy and the practice of warfighting. The question is whether these tools can be designed, developed, and deployed in a manner that facilitates compliance with international legal obligations—in particular the law of armed conflict and international human rights law—and, if so, how. To date, this question has not been answered satisfactorily. This article examines how concepts and procedures derived from international human rights law can combine with the law of armed conflict to inform militaries’ AI-related decision-making processes. A human rights-based approach to the decision-making process centres on the identification of the intended benefits of an AI tool—including an elaboration of the intended circumstances of use—and of the potential harms, so that these “competing interests” can be assessed. Reference to an appropriate evidence base, or to reasoned justification, is essential: it should be capable of “convincingly establishing” the claimed benefit of a measure and of providing a full evaluation of the potential harm. The article begins with examples of how AI is, or is likely to be, used by the military. It then discusses the current approach to the military use of AI, highlighting the need to develop guidance capable of influencing militaries’ decision-making processes. Next, it sets out, in broad terms, a human rights-based approach to AI, before examining in greater depth how that approach can inform the decision to deploy an AI tool. The final sections discuss how the intended benefit, and the potential harm, of an AI deployment can be evaluated.
