The debate about regulating military use of autonomous weapon systems has been running in UN forums for approximately seven years, and even longer in the broader policy and academic communities. By most accounts, progress has been unsatisfactory. One of the main stumbling blocks has been a general failure to agree on common terminology and definitions of the technologies being discussed. It is an understandable problem; machine autonomy is a new and complex phenomenon and, as the subject of an arms control negotiation, it is rather amorphous. ‘Autonomy’, in this context, does not refer to a particular artefact, device or capability that can be easily identified and subjected to regulation. Rather, it refers to a functionality that might be available to different extents in a wide range of weapon systems that appear similar at first glance, and might depend as much on how the weapon is used as on its intrinsic nature.

Terminology: AWS, LAWS, Killer Robots,…?

Autonomous weapon systems (AWS, the term preferred by our research group) are weapon systems that can perform their critical functions of selecting and engaging targets with some significant degree of autonomy.

Some commentators use the terms ‘lethal autonomous weapon system’ (LAWS) or ‘lethal autonomous robot’ (LAR) instead. The exact meaning of the ‘lethal’ qualifier is generally not explained, but it is sometimes used in a way that suggests a distinction between AWS that are deployed in anti-personnel roles and those used only in anti-materiel roles. In our view, the distinction is not significant for the purposes of a legal analysis. It also risks mixing up questions about autonomy with questions about regulating lethal vs non-lethal weapons.

Others describe highly autonomous weapons as ‘fully autonomous weapon systems’. ‘Fully’ is misleading in this context as it suggests that the weapon needs – or, according to some authors, allows – no human interaction at all. Autonomy is a matter of degree and no machine is fully autonomous; at the very least, human operators must decide when and where an AWS is to be activated and with what goal in mind.

The emotive and sensationalist term ‘killer robots’ is often used in arguments opposing development of AWS. It is technically inaccurate (autonomous technologies are not employed solely for the purpose of killing, and not all AWS are properly described as robots) and appears to be intended to support a preferred regulatory outcome.

Competing Conceptions of Autonomous Weapons

Countries, likewise, have adopted a range of differing criteria for designating a weapon system as ‘autonomous’. Some, notably the United States, use broad definitions which focus on the role of the human operator:

‘A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.’

By this definition, a number of existing weapon systems may be categorised as ‘autonomous’. IAI’s Harpy and Harop loitering munitions can be launched from behind the battle zone and sent to a designated area where they loiter and search for targets emitting radar signals, which they then attack without needing to seek instructions from an operator.

IAI Harop loitering munition | Image © Israel Aerospace Industries

Any of the various Close-In Weapon Systems (CIWS) deployed on land and at sea for several decades might also qualify since, once activated, they can detect and fire upon incoming threats without human intervention.

Phalanx Close-In Weapon System | Image © Raytheon Technologies

Other countries use more restrictive definitions based on some threshold level of autonomous capability. China, in a 2018 position paper, proposed a set of criteria including lethality, a high level of autonomy (‘absence of human intervention and control during the entire process of executing a task’), ‘impossibility for termination’, ‘indiscriminate effect’ and ‘evolution, meaning that through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations’. No existing weapons fit that definition, nor are any likely to in the foreseeable future. The United Kingdom also originally proposed very stringent criteria, although there are more recent signs that their position may be softening.

While general consensus remains elusive, there are signs of convergence on the view that the critical factor for regulatory purposes is the type and degree of human control that can be exercised over a weapon system while it is operating.

Applicable law

There is general agreement that international humanitarian law and other relevant bodies of law apply to AWS as they do to other types of weapons, but views differ about interpretation and application of some specific rules. Those disagreements can often be traced to different understandings of autonomy and definitions of AWS. For example, it is often claimed that AWS are inherently illegal because they cannot distinguish between civilian and military targets, or cannot assess the proportionality of an attack, in some hypothesised circumstances. Such claims implicitly position the AWS as a stand-in for a human operator in a legal sense, something more like a combatant than a weapon. The more natural reading of the law is that distinction and proportionality obligations remain with those personnel who plan and decide upon attacks, and the expected behaviour of the weapon in the circumstances of the attack is something that those personnel must take into account when they are selecting a means of conducting the attack.

Weapon reviews

Countries that are parties to the First Additional Protocol to the 1949 Geneva Conventions are obliged to conduct a legal review as part of the ‘study, development, acquisition or adoption’ of new weapons, to ensure they are compatible with all the country’s legal obligations. AWS present some significant challenges in this regard.

AWS rely on highly complex software-based control systems that, like other complex software systems, are difficult to comprehensively test; particularly so, given the unpredictable and chaotic battlefield environments in which they will be required to operate. Further, as autonomous technologies develop, AWS will, by design, operate to a greater degree in circumstances where human intervention is undesirable or infeasible, such as in communications-denied environments, where the tempo of battle is too fast, or where risk to humans is unacceptably high. That capability inevitably brings with it a type and degree of risk not present with manually operated weapon systems where a human operator can readily intervene when things go wrong. The possibilities of runaway failure or of an AWS being compromised by an adversary are often noted.

Weapon review processes therefore assume an extra level of importance, but they also face extra challenges due to the complexity and opacity of the software involved and the difficulty of anticipating all the circumstances that an AWS might encounter during an operation. Significant problems remain to be addressed, both in developing appropriate software testing methodologies and in ensuring that all States which operate AWS do in fact review their weapon systems appropriately.

The way ahead

The prospects of reaching a meaningful international consensus at the UN-hosted meetings on AWS seem increasingly bleak. After nearly seven years of meetings, States have failed to agree even on a definition of the weapons to be regulated, much less the substance of any such regulations, or indeed on whether any additional regulation is needed at all. It is possible that a non-binding political instrument, such as a code of conduct, may eventuate, but even that would depend on reaching a consensus about the basic concepts associated with machine autonomy.

To add to the challenge, the technologies of autonomous weapons are advancing relatively quickly, so that the possible applications and reasonably foreseeable consequences of AWS use are moving further beyond the reach of regulatory efforts. It is therefore critical that a dialogue be maintained between lawyers, policy makers and weapon system designers so that newly developed AWS will remain capable of being used in compliance with existing law, as well as with any developments to the law that may occur in the future.

Key points

  • The debate about regulating use of autonomous weapons is being held back by a general failure to agree on basic definitions and terminology. By some definitions, numerous existing weapon systems are autonomous. By other definitions, autonomous weapons do not exist today, and might never exist. In some cases, definitions appear to be chosen to support a preferred regulatory outcome.
  • There are signs of a growing consensus that the critical factor for regulatory purposes is the type and degree of human control over a weapon system.
  • Given the current state of the art, fears of autonomous ‘killer robots’ running amok on the battlefield are unrealistic.
  • Weapon reviews will be a critical part of the regulatory framework, but more work is needed to overcome the technical challenges and to ensure that all countries which operate autonomous weapons do in fact review them adequately.

Further reading

Alan Backstrom and Ian Henderson, ‘New Capabilities in Warfare: An Overview of Contemporary Technological Developments and the Associated Legal and Engineering Issues in Article 36 Weapons Reviews’ (2012) 94 International Review of the Red Cross 483

Vincent Boulanin, ‘Implementing Article 36 Weapon Reviews in the Light of Increasing Autonomy in Weapon Systems’ (SIPRI Insights on Peace and Security No 2015/1, Stockholm International Peace Research Institute, November 2015)

Chris Jenks, ‘False Rubicons, Moral Panic, & Conceptual Cul-De-Sacs: Critiquing & Reframing the Call to Ban Lethal Autonomous Weapons’ (2016) 44 Pepperdine Law Review 1

Tim McFarland, Autonomous Weapon Systems and the Law of Armed Conflict: Compatibility with International Humanitarian Law (Cambridge University Press 2020)

Marco Sassòli, ‘Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to Be Clarified’ (2014) 90 International Law Studies 308

Paul Scharre, ‘Autonomous Weapons and Operational Risk’ (Center for a New American Security, Ethical Autonomy Project, February 2016)

Other resources

Human Rights Watch, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’ (10 August 2020)

‘Convention on Certain Conventional Weapons – Group of Governmental Experts on Lethal Autonomous Weapons Systems’, United Nations Office for Disarmament Affairs Meetings Place (Web Page)


Version 1.0 | 2 October 2020