Holding autonomy to account: Legal standards for autonomous weapon systems

15 Sep 2021



The March 2021 UN Panel of Experts on Libya report, citing the use of an autonomous armed drone by Turkish-backed Government of National Accord Affiliated Forces to attack retreating Haftar Affiliated Forces in Libya, suggests that weapons with autonomous functionality, or autonomous weapon systems (AWS), are now a reality. While a pre-emptive ban on AWS remains elusive, there is consensus among States party to the Convention on Certain Conventional Weapons (CCW) that “IHL continues to apply fully to all weapon systems, including the potential development and use of lethal autonomous weapon systems.”

Autonomy in weapons is not unlawful per se; however, normative questions concerning the application of existing IHL rules remain: what legal standards apply to the use of AWS, and how are legal responsibility and accountability distributed between the State, its agents (for example, the military commander responsible for the AWS use), and the AWS designer or manufacturer? Multiple legal regimes may interact to attribute legal responsibility, based on differing standards, for diverse legal objectives relevant to design and use, including accountability, deterrence, compensation, and punishment.

This post surveys the legal standards relevant to the development, acquisition, and use of an AWS in armed conflict, and highlights some of the particular challenges that autonomy in weapon systems poses to States, to those who set the standards, and to those responsible for compliance.

Challenges to Article 36 Legal Review of AWS

It is axiomatic that machines are not legal entities subject to IHL rules and responsibilities. For States party to Additional Protocol I, Article 36 requires legal reviews of weapons prior to their use in armed conflict. Weapon reviews are an internal State process. A reviewing State must first determine the standards an AWS must meet to be lawful per se under IHL and the other laws binding that State. It must then determine the legality of the weapon's use in particular circumstances. Despite this requirement, existing State practice does not appear tailored to weapon systems that incorporate autonomous functions. Moreover, reviewing a system designed to undertake functions previously the preserve of humans, and that may change independently over time, poses novel testing and evaluation challenges.

When determining whether a weapon system is lawful per se, States must identify the autonomous functionality that engages IHL obligations and apportion compliance standards relevant to the expected use during the design and development phase of a new AWS. While weapon reviews are not intended to create subjective standards, in the absence of internationally recognized standards States must interpret their IHL and international law obligations to create new compliance standards. For each function the machine conducts that triggers an IHL obligation, design criteria must be pre-programmed into the AWS to meet those legal standards. These criteria must also be tested rigorously and meet a pre-determined standard of predictability and compliance.

Establishing pre-programmed legal standards of compliance is unlikely to be straightforward. There may be multiple standards determined by factors including legal risk, complexity of the operational environment, and the ability to maintain human lines of legal responsibility and accountability. These standards may vary depending on the specific task and the anticipated operating environments of the AWS. For example, the standard of compliance for distinguishing between lawful and unlawful targets is likely to be higher in a complex urban operating environment compared to a remote, uninhabited one.
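To make this concrete, the sketch below (in Python) shows one purely hypothetical way a reviewing State might record such task- and environment-dependent performance standards and check test results against them. The task names, environments, and threshold values are illustrative assumptions, not established legal or technical benchmarks.

```python
# Hypothetical sketch: recording environment-dependent compliance standards
# for an Article 36-style review. All task names, environments, and numbers
# are illustrative assumptions, not established legal or technical standards.

from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceStandard:
    task: str              # autonomous function that engages an IHL obligation
    environment: str       # anticipated operating environment
    min_accuracy: float    # minimum verified performance in testing (0.0-1.0)

# A reviewing State might demand a stricter standard for distinction in a
# complex urban environment than in a remote, uninhabited one.
STANDARDS = [
    ComplianceStandard("distinguish_target", "urban", min_accuracy=0.999),
    ComplianceStandard("distinguish_target", "remote_uninhabited", min_accuracy=0.99),
]

def meets_standard(task: str, environment: str, tested_accuracy: float) -> bool:
    """Return True only if test results satisfy the pre-set standard for this
    task in this environment; no standard on record means no clearance."""
    for s in STANDARDS:
        if s.task == task and s.environment == environment:
            return tested_accuracy >= s.min_accuracy
    return False

print(meets_standard("distinguish_target", "urban", tested_accuracy=0.995))  # False
print(meets_standard("distinguish_target", "remote_uninhabited", 0.995))     # True
```

Even in this toy form, the point is visible: the same measured performance may satisfy the standard set for one operating environment and fail the standard set for another.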

Then, having determined an AWS is lawful per se, a State must set the standards to which the AWS must perform for it to be lawfully used in a particular operational context. Depending on the autonomous functionality, this may include compliance with the IHL prohibitions against weapons and methods of warfare of a nature to cause superfluous injury or unnecessary suffering (Article 35(2) of AP1), indiscriminate attacks (Articles 48 and 51(4) of AP1), and disproportionate attacks (Article 57(2)(a)(iii) of AP1). Inability to meet State-imposed performance standards for legal compliance will influence the Article 36 weapon review.

A few other questions also flow from these two prongs of the legal review process. First, should States adopt a standard of human equivalency, or behaviour-based or performance-based standards? While IHL holds humans to a “reasonable” standard, a State may require AWS performance to meet a higher standard, as a matter of both policy and law. Equally, whether the machine can meet the same standard expected of humans in the same situation is challenging, if not impossible, to determine given the inherently complex, diverse, and uncertain nature of conflict environments. Second, a State must determine which of the numerous validation frameworks and software evaluation methodologies will prove that these legal standards can be met under the conditions of a real-world armed conflict, arguably the most complex and uncertain of environments. Finally, States must consider what technical standards an AWS must meet in terms of trust, predictability, explainability, and reliability to comply with the law.

Should autonomous systems be held to a standard of human equivalency, or something higher? Image by Amber Clay from Pixabay 

Responsibility for AWS in armed conflict

The legal review of a weapon only gets a State as far as certifying that an AWS is capable of being fielded consistent with IHL obligations. Once in a situation of armed conflict, States are primarily responsible for implementing IHL, and therefore for ensuring AWS are used in compliance with IHL. IHL imposes specific obligations upon States and includes individual mechanisms to give effect to these responsibilities.

One such mechanism, although rarely considered or utilized, could be the establishment of an International Fact-Finding Commission to inquire into the facts of an alleged violation associated with a State's use of an AWS (Article 90 of AP1). A relevant question for a fact-finding commission may be the conduct and results of an internal weapon review. If a State were to forgo its weapon review obligation, or to adopt legal standards below those necessary for lawful AWS use when fielded, that State bears the risk of deploying unlawful weapons in armed conflict. It would then, where applicable, need to pay compensation for unlawful death and destruction resulting from their use. Indeed, the responsible State may be required to pay compensation for any violation of the Conventions or the Protocol (Article 91 of AP1; Hague Convention IV, Article 3).

In developing AWS, States must therefore consider their technical ability to inform their enquiries into IHL compliance, whether supporting specific fact-finding mechanisms or more generally. Some autonomous neural systems preclude traceability of their actions, resulting in a so-called AI “black box.” Other neural networks can back-trace and map the link between the inputs to the system and the machine's decision-making at the time. Accordingly, States must consider assigning responsibility for an AWS at particular points in its design, programming, or use, ascribing standards to comply with those requirements, and creating records for such compliance mechanisms. Flowing from this is the requirement for the State to assess the AWS's capacity to create records (and to what technical standard) to enable fulfilment of its obligation to investigate breaches of IHL. Must the records support criminal investigations under domestic regimes, or simply enable tracing of who initiated the autonomous system's resultant behaviour?
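By way of illustration only, the following sketch shows the kind of decision record an AWS might be required to generate to support such traceability. The field names, identifiers, and the use of a hash digest are assumptions made for the example, not any mandated standard.

```python
# Hypothetical sketch of a decision record an AWS might create so a State can
# later trace and investigate its behaviour. Field names and content are
# illustrative assumptions only.

import json
import hashlib
from datetime import datetime, timezone

def make_decision_record(system_id: str, software_version: str,
                         operator_id: str, sensor_inputs: dict,
                         model_output: dict, human_action: str) -> dict:
    """Bundle the inputs, the machine's output, and any human intervention
    into a single timestamped, tamper-evident record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "software_version": software_version,
        "operator_id": operator_id,        # who initiated or supervised the behaviour
        "sensor_inputs": sensor_inputs,    # what the system observed
        "model_output": model_output,      # what the autonomous function concluded
        "human_action": human_action,      # e.g. "authorized", "overridden", "none"
    }
    # A digest makes later alteration of the record detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

rec = make_decision_record("AWS-001", "1.4.2", "operator-7",
                           {"track_id": 42, "speed_mps": 1.2},
                           {"classification": "civilian_object", "confidence": 0.97},
                           human_action="none")
print(rec["digest"][:16])
```

Whether records of this kind would need to meet evidentiary standards for criminal investigation, or merely support internal review, remains exactly the open question posed above.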

States are required to implement measures for the execution of their IHL obligations. This includes making legal advisors available to their military commanders (Article 82 of AP1) and ensuring the members of their military are trained in IHL (Articles 80 and 83 of AP1). Finally, States are responsible for repressing breaches of IHL (Article 85 of AP1). These combined obligations mean States must ensure that their operators, commanders, and legal advisors are appropriately trained to understand the capabilities and limitations of the AWS. Users must understand how the AWS can be used in compliance with IHL in a particular circumstance and, where necessary, be able to intervene in the AWS's operation to ensure compliance and safeguard protected persons and objects.

Individual responsibility for AWS use in armed conflict

IHL imposes specific obligations upon individuals. The IHL rule on precautions in attack, for example, requires “those who plan or decide upon an attack” to do “everything feasible” to distinguish between lawful and unlawful targets and to “take all feasible precautions” to avoid collateral damage. What is “reasonable” and “feasible” is contextually dependent. For AWS, it should mean reasonableness in light of a person's reliance upon an autonomous function that subsumes an action the person would previously have conducted or controlled themselves.

The obligation to do what is reasonable and feasible is accompanied by accountability. A fundamental premise of international criminal law is to hold individuals who commit grave breaches of IHL to account for their unlawful actions in armed conflict. This raises challenges in the context of AWS. How can an individual be held criminally responsible for the artificial decisions undertaken by an AWS under that person's meaningful human control?

Part of the answer must lie in how the standard is interpreted. In the case of Galić, the International Criminal Tribunal for the former Yugoslavia applied a “reasonable military commander” standard to determine whether an attack was unlawfully disproportionate. The Tribunal observed that clothing, activity, age, and gender were relevant to determining civilian status, as were movement, shape, colour, and speed in identifying a civilian object.

If an AWS were designed to recognize such indicia and determine the civilian status of persons and objects, the State must have determined, as a matter of law, that an AWS performing a distinction role must achieve a standard equal to or higher than the reasonable belief required of a human. The commander using the AWS would then be able to consider its use lawful in the particular operation. Ascribing a higher standard to machines remains fraught, however. What is considered “reasonable” in the context of autonomous decision-making systems requires the fielding State to carefully consider the differing conditions applicable to every potential use and to make legal and policy decisions accordingly.

A requirement for meaningful human control over the critical functions of the AWS is increasingly espoused as a prerequisite for its lawful deployment in armed conflict. Whether meaningful human control is a legal requirement or an ethical or moral obligation is the subject of ongoing debate. On the premise that the machine remains under human control and supervision and conducts tasks only at the behest of its human operator, human accountability and responsibility remain for the use of a system whose functionality engages IHL obligations, regardless of which use-of-force functions have been delegated to the machine. Thus, a State must also set a standard for the level of control and intervention necessary for human control over the autonomous functionality to be considered “meaningful.”
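The sketch below offers one hypothetical reading of such a standard as a human-in-the-loop authorization gate, in which the autonomous function can only recommend an engagement and a logged human approval is required before execution. The function names and data fields are invented for the example and are not drawn from any actual system or agreed definition of meaningful human control.

```python
# Hypothetical sketch of a human-in-the-loop gate: the autonomous function may
# only recommend an engagement; execution requires explicit operator approval,
# and the decision is logged either way. One possible reading of "meaningful
# human control", not an established definition.

from typing import Callable

def engage_if_authorized(recommendation: dict,
                         ask_operator: Callable[[dict], bool],
                         execute: Callable[[dict], None],
                         log: Callable[[str, dict], None]) -> bool:
    """Present the machine's recommendation to a human operator; only execute
    if the operator explicitly approves, and record the outcome."""
    approved = ask_operator(recommendation)
    log("approved" if approved else "rejected", recommendation)
    if approved:
        execute(recommendation)
    return approved

# Illustrative wiring: the operator rejects, so nothing is executed.
engage_if_authorized(
    {"track_id": 42, "assessed_status": "military_objective", "confidence": 0.82},
    ask_operator=lambda rec: False,
    execute=lambda rec: print("engaging", rec["track_id"]),
    log=lambda decision, rec: print(decision, rec["track_id"]),
)
```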

Article 30 of the Rome Statute holds persons criminally liable where the material elements of an offence are committed with the default mental element of “intent and knowledge.” If a military commander deployed an AWS designed to recognize the Galić indicia described above, knowing that it is incapable of doing so in the environment in which it was deployed, and if that deployment resulted in the killing of civilians, could such action meet the mental element of intent and knowledge required for criminal responsibility for the war crime of wilful killing under Article 8(2)(a)(i)?

Further obscuring the principle of legal accountability ascribed to human actors is the distributed nature of autonomous systems themselves. Different people are responsible for designing the algorithm, programming it, testing it, training it, deploying it, and potentially authorizing specific actions conducted by it. In this regard, the criticisms of the dilution of individual responsibility in aerial bombing campaigns through the use of a “kill chain” and “targeting system” become more pronounced. Autonomous systems are even more dispersed and may involve actors who were unaware of how and when their work would be put into practice. Attribution of mistake, negligence, malfeasance, or failure to meet IHL standards may occur cumulatively and will be difficult to trace. Establishing standards for responsibility, held by a person within a system of controls that disperses responsibility across a number of actors, will prove a challenge.

The principle of legal accountability is obscured by the many hands required to field AWS, from program design through to deployment.

Manufacturers’ liability for AWS design, development, and use

Advanced technologies, particularly those that incorporate autonomous functions, also challenge notions of how responsibility for IHL compliance can be attributed to a designer or developer for the use of a system in a particular operational environment.

The question of whether negligence even applies in armed conflict, and to whom, remains uncertain. Equally, there is ongoing pressure to consider applying IHL to corporations that play an increasing role in armed conflict, utilizing the technological advantages provided by AI systems. What responsibility exists, for instance, for a company that knowingly exports its AWS to a State that relies upon the company's assurances of IHL compliance built into that AWS (noting that IHL obligations differ from State to State)? Likewise, the extent to which product liability law applies during armed conflict remains unsettled. Some consider that tracing actions back to developers and designers in armed conflict is too long a bow to draw for criminal responsibility to arise from use of the autonomous system.

While there was near consensus among States at the recent UN CCW Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) that regulation of LAWS must apply to the whole life cycle of a weapon system, it is unclear how this translates to those responsible for the design and development of an AWS. That input may occur years prior to the weapon's use, potentially with no knowledge of how the system or algorithm may be employed. The possibility of dual-use autonomous technology being incorporated into weapon systems further exacerbates these legal responsibility considerations.

Considering the broader system of State controls for the use of force, weapons without autonomous functions are designed and tested to meet specified performance standards, after which they are introduced into a State's military arsenal. The same staged process cannot be applied to AWS. For systems incorporating autonomy, algorithms and data inputs will directly affect the target selection and strike decisions that would ordinarily have been undertaken by human actors in armed conflict. Turning to parallels outside armed conflict for examples of liability processes does not seem helpful. Recent cases relating to the self-learning functions of autonomous vehicles demonstrate that the presence of a human controller generally defers legal responsibility to that actor rather than to the designer or developer. But what if a design flaw causes an IHL breach during use in armed conflict, such as an inherent data bias in the algorithm: an undisclosed component of the autonomous neural network that applies gender- or ethnicity-specific biases to its identification processes?

What next?

This post has flagged some of the issues that vex the international community and States in their ability to set legal standards relevant to the design, acquisition, and performance of AWS. These issues are not insurmountable, but it is evident that integrating legal assessments and considerations throughout the design, development, acquisition, and deployment processes associated with AWS is critical to enabling States, and those who use these systems, to meet their legal obligations when fielding AWS in situations of armed conflict.

In addition to the many challenges faced in regulating autonomous weapon systems prior to their lawful use in armed conflict, significant questions remain about whether the standard adopted should be the same as, or better than, that of a human decision-maker, and what responsibilities will be passed on to the designers and developers of these systems, given they are “closer” to operational decision-making than their traditional weapons-manufacturing peers have been in the past. Such standards will involve complex considerations that take into account State political, economic, and strategic goals, the willingness of industry to cooperate, and the ability of lawmakers to influence these outcomes.

Regardless of what standard is applied, absent any binding legal instruments to the contrary, it is evident that there will be a period when State practice interpreting existing rules will shape how IHL will adjust in the future to this new technology—hopefully for the betterment of humankind rather than to its detriment.


This article appeared on Articles of War, published by the Lieber Institute, US Military Academy, West Point. It is informed by the ongoing research of the Law and the Future of War Research Group. The views and opinions expressed herein are those of the authors, and do not necessarily reflect the views of the Australian Government or any other institution.

 
