Autonomy
Autonomy has become a technological buzzword. All kinds of devices and systems attract the label ‘autonomous’. Examples include self-driving cars and driverless trains, as well as some uncrewed ships and aircraft. Also, autonomous weapon systems are, depending on one’s definitions, a current reality or a future prospect.
These technological developments have triggered much ethical and regulatory debate. Policy makers are still grappling with the implications of autonomous systems for legal systems. At the same time, the meaning of autonomy creates much confusion. Participants in regulatory debates talk past each other when they hold diverging views of autonomy.
- Autonomy has different meanings
- Autonomy is relational
- Autonomy is task-specific
- Autonomy is a continuum
- Trusted autonomy
- Key points
- Further reading
Autonomy has different meanings
Part of the difficulty arises from the different uses of the word ‘autonomy’ in different disciplines. Philosophy, ethics and law have a long tradition of talking about autonomy. So, the word carries a fairly precise meaning. In this philosophical sense, autonomy implies self-regulation or self-governance. It refers to the ability of an entity to establish its own rules of conduct and to follow those rules. This tracks the Greek origins of the word autonomy: autós, ‘self’, and nómos, ‘law’.
To speak of philosophical autonomy means to set the bar very high. In some circumstances, even humans lack full autonomy. Consider the context of health care. Respect for patient autonomy, the right of people to make informed decisions about their care, is a foundation stone of medical ethics. But patient autonomy may be undermined by many factors, including cognitive dysfunction, as well as physical, social and economic duress. So, technological systems, built and used by humans, remain far from achieving philosophical autonomy.
In an engineering context, however, autonomy tends to mean something much less demanding. Here, autonomy refers to the ability of a system to sense its environment and to act in the environment in pursuit of certain goals. This does not imply a human-like ability to define one’s own goals and rules of conduct. For technological systems, these parameters are set by the designers and operators. Technological autonomy, in the most basic sense, refers to a system’s ability to perform some task or function without requiring real-time interaction with a human operator.
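To make the engineering sense concrete, consider the minimal sketch below. It is a hypothetical Python example, not drawn from any particular system: a thermostat-like controller that senses and acts without real-time human input, where the goal is fixed by a human designer rather than by the system itself.

```python
# A minimal sketch of engineering autonomy: a controller that senses its
# environment and acts on it without real-time human interaction.
# The goal (target temperature) is set by the designer/operator, not by
# the system. All names here are illustrative assumptions.

class Thermostat:
    def __init__(self, target_temp: float):
        self.target_temp = target_temp  # goal given by a human

    def sense(self, room_temp: float) -> float:
        return room_temp  # in a real system: read a sensor

    def act(self, room_temp: float) -> str:
        # Decide on an action in pursuit of the designer-given goal.
        if room_temp < self.target_temp:
            return "heat"
        return "idle"

controller = Thermostat(target_temp=21.0)
for reading in [18.5, 20.0, 21.5]:  # simulated sensor readings
    print(reading, "->", controller.act(controller.sense(reading)))
```

The point of the sketch is that nothing in the loop requires a human while it runs; yet the system plainly does not set its own rules of conduct in the philosophical sense.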
The technological conceptualisation of autonomy captures a wide range of technologies, both existing and prospective. It therefore permits the examination of current or imminent legal implications, and not only future regulatory issues. To be useful, however, three aspects of technological autonomy warrant further explanation.
Autonomy is relational
Autonomy does not describe an entity as such, but rather the relationship between the entity and other entities. One cannot be autonomous in the abstract. One can only be autonomous from someone or something else.
Technological autonomy is concerned with the interaction between an artificial system and its human operator. Thus, autonomy is a measure of human-machine (or human-robot) interaction. Accordingly, it is impossible to define an autonomous system simply by reference to its technical features. Whether a system is autonomous depends on the role of the human vis-à-vis that system.
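One way to picture this relational point is to model the operator’s role explicitly. In the hypothetical sketch below, the same machine counts as autonomous or not depending purely on how the human interacts with it; the role categories are assumptions made for the example.

```python
from enum import Enum

# Hypothetical illustration: autonomy as a property of the human-machine
# relationship, not of the machine's technical features.

class OperatorRole(Enum):
    TELEOPERATION = "human controls every action in real time"
    SUPERVISION = "human monitors and may intervene"
    UNATTENDED = "no real-time human interaction"

def is_autonomous(role: OperatorRole) -> bool:
    # Defined by the absence of real-time human control,
    # not by anything about the machine itself.
    return role is not OperatorRole.TELEOPERATION

for role in OperatorRole:
    print(role.name, "->", "autonomous" if is_autonomous(role) else "not autonomous")
```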
Autonomy is task-specific
Describing entire systems as autonomous invites oversimplification. A system may well be autonomous with respect to some functions but not others.
For example, a simple autopilot allows an aircraft to maintain course and altitude. Such an aircraft thus has autonomy in flight. A more sophisticated system might be able to plot a course from point A to point B and avoid obstacles. In such a case, there would also be autonomy in navigation. Some systems are capable of fully automatic landing (‘autoland’), making that function autonomous. But, in all these examples, there would still be a need for direct human involvement in, say, refuelling and take-off. These functions, in other words, would not be autonomous.
There is no clear point at which the aircraft becomes autonomous. That said, a taxonomy could be agreed upon. For example, we might decide that an aircraft that can taxi, take off, navigate, and land autonomously amounts to an autonomous aircraft. But that would be shorthand for saying that certain functions, which we have deemed critical, have been made autonomous. Other important aspects of the aircraft’s operation, such as loading, refuelling or maintenance, would still be performed by humans.
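The task-specific point can be expressed by describing a system as a profile of functions rather than with a single label. The sketch below is a hypothetical illustration; the function names and the choice of ‘critical’ functions are assumptions made for the example.

```python
# Hypothetical sketch: an 'autonomous aircraft' as shorthand for a
# profile of functions, some autonomous and some not.

aircraft_functions = {
    "taxi": True,
    "take-off": True,
    "navigation": True,
    "landing": True,
    "refuelling": False,   # still performed by humans
    "maintenance": False,  # still performed by humans
}

# A taxonomy we might agree on: call the aircraft 'autonomous' if every
# function we have deemed critical is autonomous.
CRITICAL = {"taxi", "take-off", "navigation", "landing"}

is_autonomous_aircraft = all(aircraft_functions[f] for f in CRITICAL)
print("autonomous aircraft?", is_autonomous_aircraft)  # True, by this convention
```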
Autonomy is a continuum
Autonomy is not an on-off phenomenon. A system does not simply perform a function autonomously or manually. With respect to any function, a system can have more or less autonomy. In other words, autonomy inhabits a continuum or spectrum, with different functions having more or less autonomy. Furthermore, a given system may exhibit different levels of autonomy within a single function depending upon circumstance. For example, an aircraft may plan its own route for part of a trip, but may not do so for other parts of the trip where risk is higher or human judgement is required.
There are two ways of assessing the degree of autonomy. The first approach focuses on the quantity of human interaction required. Under this approach, more autonomy means that the system needs less frequent human interaction. So, the autonomy of an aircraft could be measured by looking at how often the pilot needs to intervene in its operation. The second approach looks at the quality of human interaction. In this sense, autonomy is higher where the system requires higher-level human interaction. So, autonomy would be very high where a pilot can instruct an aircraft to fly from Melbourne to Brisbane. Autonomy would be lower if the pilot needs to specify coordinates, altitudes, air speeds and the like.
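These two approaches can be thought of as two different measurements over the same human-machine interaction. The sketch below is a toy illustration only; the scales, command levels and formulas are invented for the example.

```python
# Toy illustration of the two ways of assessing degree of autonomy.

# Quantity approach: fewer human interventions per hour -> more autonomy.
def autonomy_by_quantity(interventions: int, flight_hours: float) -> float:
    rate = interventions / flight_hours
    return 1.0 / (1.0 + rate)  # 1.0 means no interventions at all

# Quality approach: higher-level human commands -> more autonomy.
# The ordering of command levels is an assumption for this example.
COMMAND_LEVEL = {
    "stick-and-rudder": 0,
    "set heading/altitude": 1,
    "fly waypoint route": 2,
    "fly Melbourne to Brisbane": 3,
}

def autonomy_by_quality(command: str) -> float:
    return COMMAND_LEVEL[command] / max(COMMAND_LEVEL.values())

print(autonomy_by_quantity(interventions=2, flight_hours=4.0))  # ~0.67
print(autonomy_by_quality("fly Melbourne to Brisbane"))          # 1.0
```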
Trusted autonomy
As noted, autonomy refers to the ability of a system to do something useful without real-time human interaction. Autonomy is therefore quite independent of the users’ beliefs, values and attitudes. Increasingly, the concept of ‘trust’ is deployed to capture this human dimension of autonomous systems. Thus, autonomous systems can be described as trusted or trustworthy where humans are willing to rely on them for the performance of particular tasks. Whether a human trusts an autonomous system depends on a range of factors. These factors notably include properties of the system itself, such as its reliability, predictability and the explicability of its operations. But characteristics of the user (such as a propensity to trust or familiarity with the system) and the broader environmental context also play a role.
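As a rough illustration of how such factors might interact, the sketch below combines system properties, user characteristics and context into a single toy trust score. The weights and the linear form are assumptions made for the example, not an established model from the literature.

```python
# Toy trust model: a weighted combination of the factors mentioned above.
# The weights and linear form are illustrative assumptions only.

def trust_score(reliability: float, predictability: float, explicability: float,
                user_propensity: float, familiarity: float,
                context_risk: float) -> float:
    """All inputs in [0, 1]; higher context_risk lowers trust."""
    system = 0.4 * reliability + 0.2 * predictability + 0.1 * explicability
    user = 0.1 * user_propensity + 0.1 * familiarity
    context = 0.1 * (1.0 - context_risk)
    return system + user + context  # result in [0, 1]

print(trust_score(reliability=0.9, predictability=0.8, explicability=0.6,
                  user_propensity=0.5, familiarity=0.7, context_risk=0.3))
```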
Key points
- The meaning of the word ‘autonomy’ varies by scientific or scholarly discipline. In philosophy, and in cognate fields such as ethics and law, autonomy tends to have a stricter meaning than in engineering.
- In the engineering context, autonomy refers to the ability of a system to sense its environment and to act in the environment in pursuit of certain goals. In other words, an autonomous system can perform some task or function without requiring real-time interaction with a human operator.
- Autonomy is relational in that it describes the interaction between an artificial system and its human operator, rather than the technical characteristics of the system. Autonomy is task-specific in that a system may be autonomous with respect to some of its functions but not others. Autonomy is a continuum in that a particular function of a system may be more or less autonomous.
Further reading
Jenay M Beer, Arthur D Fisk, and Wendy A Rogers, ‘Toward a Framework of Levels of Robot Autonomy in Human-Robot Interaction’ (2014) 3(2) Journal of Human-Robot Interaction 74
John Christman, ‘Autonomy in Moral and Political Philosophy’ in Edward N Zalta (ed), Stanford Encyclopedia of Philosophy (online, last revised on 29 June 2020)
S Kate Devitt, ‘Trustworthiness of Autonomous Systems’ in Hussein A Abbass, Jason Scholz and Darryn J Reid (eds), Foundations of Trusted Autonomy (Springer 2018) 161
Willem FG Haselager, ‘Robotics, Philosophy and the Problems of Autonomy’ (2005) 13 Pragmatics & Cognition 515
Michael Lewis, Katia Sycara and Phillip Walker, ‘The Role of Trust in Human-Robot Interaction’ in Hussein A Abbass, Jason Scholz and Darryn J Reid (eds), Foundations of Trusted Autonomy (Springer 2018) 135
Rain Liivoja, Maarja Naagel and Ann Väljataga, Autonomous Cyber Capabilities under International Law (Research Paper, NATO CCDCOE, July 2019) ss 1.1 and 1.2
Tim McFarland, ‘The Concept of Autonomy’ in Rain Liivoja and Ann Väljataga (eds), Autonomous Cyber Capabilities under International Law (NATO CCDCOE, forthcoming in 2021)
Tim Smithers, ‘Autonomy in Robots and Other Agents’ (1997) 34 Brain & Cognition 88
Version 1.0 | 2 October 2020