Robotics & Autonomy


New Report Available
Roff, Heather M. “Meaningful Human Control or Appropriate Human Judgment? The Necessary Limits on Autonomous Weapons.” Briefing paper prepared for the Review Conference of the Convention on Conventional Weapons, December 2016.


Autonomy, Robotics & Collective Systems

Our research focuses on the interaction between autonomous systems and robotics. In particular, we view the coupling of autonomy and robotics through a three-pillar approach to collective systems. From one perspective, this research pursues knowledge development on resilient autonomous collective systems, such as biologically inspired and engineered swarms. This approach treats the collective system as a single unit constituted by multiple individual autonomous robots.

From a second perspective, our research interrogates the collective system as constituted by both humans and robots. Here the system emerges from the interaction between humans and robots, and from how we engineer and design autonomous robotic systems to work alongside human team members in pursuit of collaborative goals. Human factors, human-robot interaction, and human-computer interaction are all subsets of this perspective.

Finally, our research examines the socio-political-technical collective system within which autonomy and robotics are embedded. In this space, we attempt to raise and answer questions regarding the ethics, law, and policy regulation of autonomous agents and robotics. Such systems include artificially intelligent agents, lethal autonomous systems, autonomous robots and vehicles, and cybersecurity-related systems. Given the vast deployment of such systems throughout our communities and economies, this socio-political perspective is no less important than the other two.

Our interdisciplinary approach to research is driven by a diverse set of faculty, researchers, and students. GSI’s work in this area draws not only from engineering and computer science, but also from moral philosophy, political science, law, sociology, history, anthropology, psychology, mathematics, and physics.


Project: Artificial Intelligence, Autonomous Weapons, and Meaningful Human Control

This research was supported by the Future of Life Institute (FLI-RFP-AI1 program, grant #2015-146617).

Principal Investigators: Dr. Heather Roff, Research Scientist, Global Security Initiative ASU
Richard Moyes, Managing Director of Article 36

Project Narrative: We address how technological developments in artificial intelligence (AI) affect the relationships between society, AI, and autonomous weapons systems. As weapons increasingly rely on AI, this project provides an interdisciplinary framework for understanding how such weapons systems might be kept under meaningful human control.

Briefing paper delivered to state delegates at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 11-15 April 2016.

Dataset: Survey of Autonomous Weapons Systems

Download the Full Dataset Here

Dataset Overview:

The Survey seeks to develop a general knowledge base on the developing role of AI in weapons systems. To do this, we built a working understanding of the current state of the art in presently deployed (and several emerging) weapon systems. The current state of play matters because it informs how researchers and developers approach the design of these systems, as well as where they may seek further developments in the future. It also contributes to international legal and policy discussions, which may in turn shape policy formulation.

We therefore offer a freely accessible, public dataset of autonomous capabilities in weapons systems. Each system is coded according to domain (air, land, sea, space) and functionality (mobility, navigation, identification, and selection), as well as many other metrics.


There is no internationally agreed upon definition of “autonomy” for weapons systems, nor is there consensus on the features or traits that combine to make up an autonomous system.
For the purposes of this study, we considered three orthogonal sets of features that describe requirements for a weapons system to be fully autonomous. These were selected to span the three main characteristics of autonomous weaponized agents: 1) they can move independently through their environment to arbitrary locations; 2) they can select and fire upon targets in their environment; and 3) they can create and/or modify their goals, incorporating observation of their environment and communication with other agents. Each set represents a spectrum of capabilities to which the underlying features contribute. A weapons system considered “fully autonomous” would have a large number of the enumerated features in each set. Categorizing features in this way creates a framework for comparing existing and experimental weapons systems and for observing the emergence of autonomous qualities over time.

Data are taken from the top five weapons-exporting countries according to the Stockholm International Peace Research Institute’s Arms Transfers Database. In order of export volume, these countries are:

  1. United States
  2. Russia
  3. China*
  4. France
  5. Germany

*Open source publicly available data for arms transfers from China is difficult to validate.

From this sample, we examined the top weapons manufacturers within each country. Coded systems are presently deployed systems. To show emerging trends, we also included a small set of developmental systems, not presently deployed, among the unmanned aerial vehicles, unmanned surface vehicles, and unmanned underwater vehicles. Publicly available information on the full capabilities of these developmental systems is limited. All information is taken from publicly available open-source resources. Information on Russian and Chinese systems is somewhat limited.

A general survey of existing and emerging weapons systems was conducted, using the widest array of public, non-classified information available. Sources included specification sheets and websites of the weapons system manufacturers, reports from national defense organizations and NGOs, defense publications (e.g. Jane’s Defense), and websites aggregating publicly available information. A database record was created for each distinct weapon or weapons system; where a weapons system had many variants, separate records were created only for those sub-types that exhibited a variation in features contributing to autonomy. Due to the strategic importance of many of these weapons systems’ details and the necessarily secretive nature of the military organizations employing them, data were often sparse, especially for weapons systems not employed by Western nations or not available for general deployment by states other than the state of origin. Every effort was made to classify these systems accurately, but where classification of certain categories was not possible, those database fields were left blank; for systems with more than two blank fields, categorization was not attempted. Again, this led to lower representation of deployed systems from nations with less open information, especially China.

Database records included date of deployment, country of origin, manufacturer, notes on the system, references used to determine the classification and scoring of the weapons system, and categorization to a specific weapons type. The categories used were:

  • AAM: Air-to-air missile.
  • SAM: Surface-to-air missile, whether fired from land, sea, or submarine platforms.
  • SDS: Surface defense systems, excluding strategic ballistic missile defense systems.
  • CM: Cruise missile/standoff-weapon, indicating weapons meant to be fired at land targets from a distance where the firing system is not at significant risk from the land targets.
  • SM: Anti-ship missile.
  • AGM: Air-to-ground missile.
  • ICBM: Intercontinental ballistic missile.
  • ARM: Anti-radiation missile, meaning a missile fired from any platform intended to attack radiation based (e.g. radar) sensing systems.
  • GGM: Ground-to-ground missile.
  • BMD: Ballistic missile defense. Surface defense systems specifically designed to counter strategic ballistic missiles (e.g. nuclear-armed ICBMs).
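The record structure described above can be sketched as a simple data class. This is an illustrative sketch only: the field names, and every value in the example record, are assumptions for demonstration, not the dataset's actual column headers or contents.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WeaponRecord:
    # Illustrative field names, not the dataset's real column headers.
    name: str
    weapon_type: str                    # category code from the list above, e.g. "CM"
    country_of_origin: str
    manufacturer: str
    domain: str                         # air, land, sea, or space
    deployment_year: Optional[int] = None
    notes: str = ""
    references: list = field(default_factory=list)  # sources used for coding

# A hypothetical record; all values are made up for illustration.
example = WeaponRecord(
    name="Example Standoff Weapon",
    weapon_type="CM",
    country_of_origin="United States",
    manufacturer="(hypothetical)",
    domain="air",
    deployment_year=1991,
)
```

Records with more than two unclassifiable fields would, per the methodology above, simply be left out rather than filled with guesses.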

Each of the identified systems was then compared to the list of autonomous features detailed above, with a binary indication of whether that system had the selected feature or not.  The only exception to this was “fire control”, where the levels of fire control were ordinal and coded as follows:

  1. The system cannot fire: either a human must fire, or the system merely notifies the operator (0)
  2. The system notifies a human operator and waits for command to fire; or human identifies and selects target and fires (1)
  3. The system is set to fire on a target unless a human overrides the system (2)
  4. The system fires without human intervention (3)
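The ordinal fire-control coding above can be expressed as a small lookup table; the level names here are paraphrased labels of our own choosing, not terms from the dataset.

```python
# Ordinal coding of fire control, per the four levels above.
# Level names are paraphrased labels, not official dataset terms.
FIRE_CONTROL_LEVELS = {
    "cannot_fire_or_notify_only": 0,   # human must fire, or system merely notifies
    "notify_and_wait_for_command": 1,  # human confirms before firing
    "fire_unless_human_overrides": 2,  # human-on-the-loop
    "fire_without_intervention": 3,    # fully autonomous engagement
}

def fire_control_score(level: str) -> int:
    """Return the ordinal fire-control code for a named level."""
    return FIRE_CONTROL_LEVELS[level]
```

Unlike the binary features, these values carry an ordering, so they can enter the composite scores directly as ordinal data.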

Composite scores were created for each category of autonomy by employing a weighted sum:

s_j = Σ_i w_ji · x_ji

where j is the category index, i is the feature index, s_j is the composite score for the jth category, x_ji is the ith feature of the jth category, and w_ji is the corresponding weight. The weights were normalized for each category so as to sum to 1, giving a maximum composite score of 1 for each category. These composite scores give an aggregate measure of the level of autonomy along each dimension for each weapons system catalogued, allowing systems and categories of systems to be compared and specific trends in autonomy to be identified.
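As a minimal sketch, the normalized weighted sum can be implemented in a few lines; the function name and the example feature values are illustrative, not taken from the dataset.

```python
def composite_score(features, weights):
    """Weighted sum s_j = sum_i w_ji * x_ji, with weights normalized to sum to 1.

    features -- feature values x_ji for one category (binary or ordinal)
    weights  -- raw weights w_ji, one per feature
    """
    total = sum(weights)
    normalized = [w / total for w in weights]
    return sum(w * x for w, x in zip(normalized, features))

# Example: four binary features, equal raw weights; three features present.
score = composite_score([1, 0, 1, 1], [1, 1, 1, 1])  # -> 0.75
```

With equal weights this reduces to the fraction of features present, and a system with every feature in a category scores exactly 1.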


The first category (“Self-Mobility”) focused on independent movement, and it was composed of the following variables:

  • Mobility: The capability of the system to move or direct itself to a location outside of its original trajectory.  For example, a rocket propulsion system on a missile, a track system and motor on a ground vehicle, and wings with control surfaces on gliding bombs all meet this definition.  Surface-to-air defense systems fixed on a ship do not, as their movement is directed by the overall movement of the ship, not the system itself.
  • Persistence: For a weapons system, the quality of remaining in operation after a weapons payload has been delivered.  This suggests the ability of the system to move to another location to deliver another weapons payload at some point in the future, which is why it is grouped within this category.  An example is an unmanned aerial vehicle (UAV) armed with guided missiles; the UAV has the persistence feature, but the guided missiles on the UAV do not, as they are destroyed in the act of delivering the weapons payload.
  • Homing: The capability of a weapons system to direct itself to follow an identified target, whether indicated by an outside agent or by the weapons system itself.  This property is characteristic of “fire-and-forget” weapons, although weapons that follow an externally generated signal still have this property.  For example, both laser-guided missiles (outside agent) and heat-seeking missiles (weapons system itself) have the “homing” property.
  • Navigation: The capability of a weapon system to direct itself to an arbitrary geographic point.  This can be realized through inertial navigation systems, global positioning systems, or other guidance systems.

The second category (“Self-Direction”) focused on the level of self-direction in the use of weapons systems, and it was composed of the following variables:

  • Target Identification: The capability of the system to identify a potential target from the background of information in the environment.  When categorizing real systems, this property has been broken into “Defensive Identification” and “Offensive Identification” to distinguish between systems that primarily respond to incoming threats (e.g. surface-to-air missile defense systems) and systems that proactively find targets.
  • Target Image-Discrimination: The capability of the system to categorize identified targets arbitrarily.  For example, this includes the ability of weapons systems to separate ships by size and class and the ability to discriminate between various fixed targets (e.g. buildings).  This has typically been performed by using computer vision to compare images with a database of known targets.
  • Target Prioritization: The capability of the system to rank order identified targets by some pre-defined rubric, such as by the threat they pose to a defended asset or by their strategic or tactical value.  The rubric could be pre-determined by the system’s programmers or could be defined by the system itself.
  • Target Acquisition: The capability of the system to select a target to which it will apply its weapons systems.
  • Weapons selection: For systems that have multiple weapons, the capability of the system to autonomously determine the most appropriate weapon for the chosen target. 

The third category (“Self-Determination”) encompasses the ability of the weapons system to modify or set goals; in short, varying degrees of its ability to autonomously “decide” what it must do next.  It was composed of the following variables:

  • Self-engagement: The ability of the weapons system to self-determine delivery of its weapons payload on an acquired target.
  • Autonomous communication: The ability of the weapons systems to communicate with other autonomous agents to transmit or receive information about changes in its environment or modifications in its goals.  For example, a missile that has the ability to communicate to other missiles that it is delivering its payload to a given target so that those missiles can re-prioritize their target lists has this property.  As another example, a UAV that has the ability to send new target lists to loitering missiles in the air without human intervention also has this property.  However, a missile system that can only receive new targeting information in flight from human operators does not have this property.
  • Goal self-modification: The ability to shift mission goals based on newly acquired information from autonomous sources.  An example is the ability of a cruise missile to change targets based on the self-identification of a higher-value target (e.g. a surfacing submarine) that was not known when it was launched.  The key to this property is that it is the autonomous system itself that determines that the mission should be adjusted, not a human operator in the field.
  • Goal setting/planning: The ability of the autonomous system to set its own tactical goals and missions based on its understanding of strategic objectives.  An example would be an autonomous UAV that plans its own sorties, including target lists and priorities, without intervention of humans beyond the availability of general strategic objectives and reconnaissance information.
  • Learning and adaptation in-field: The ability of an autonomous weapons system to use environmental information in an unsupervised manner to adapt its models of the world and adjust its behavior so as to improve performance, according to its set of objective functions.  This could include adaptation of a subset of the objective functions themselves.
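The three categories and their constituent features enumerated above can be collected into a single schema, from which the equal-weight composite score reduces to a simple fraction. The dictionary keys are paraphrased labels of our own, not the dataset's field names.

```python
# Paraphrased feature labels for the three categories described above;
# not the dataset's actual field names.
AUTONOMY_FEATURES = {
    "self_mobility": ["mobility", "persistence", "homing", "navigation"],
    "self_direction": ["target_identification", "target_image_discrimination",
                       "target_prioritization", "target_acquisition",
                       "weapons_selection"],
    "self_determination": ["self_engagement", "autonomous_communication",
                           "goal_self_modification", "goal_setting_planning",
                           "learning_and_adaptation"],
}

def category_score(system_features: set, category: str) -> float:
    """Fraction of a category's features a system exhibits (equal weights),
    i.e. the normalized weighted sum described earlier."""
    feats = AUTONOMY_FEATURES[category]
    return sum(f in system_features for f in feats) / len(feats)

# Hypothetical system with homing and navigation only.
mobility = category_score({"homing", "navigation"}, "self_mobility")  # -> 0.5
```

A system scoring near 1 on all three categories would sit at the “fully autonomous” end of each spectrum.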


To date, 256 systems have been categorized and scored.  A plurality of the systems are missile-based, due largely to the multiplicity of these systems, their inherent semi-autonomous natures, and the large amount of information publicly available for currently deployed systems.

Figure 1

Figure 1: Development of various autonomous technologies over time

Figure 1 shows the proliferation of various autonomous technologies over time.  Homing clearly emerged earliest and has been included in the most systems.  This makes sense from a technical standpoint, as the underlying technologies are relatively simple (radar, thermal imaging) and form the basis for many of the others (e.g. Target Identification, Target Image Discrimination).  As an autonomous technology, Homing’s main danger is in following an unintended target, and much of the past five decades’ technological development in this field has centered on making it harder to fool.  Navigation appears in the second-highest number of systems, with the rapid increase since 2000 due to improvements in and miniaturization of GPS technology.  The three-member set of Target Acquisition, Offensive Target Identification, and Defensive Target Identification share the same growth rates.  While target selection may seem a relatively recent concern in the autonomy of weapons, it is clear that weapons systems could positively identify and select a target from an early date.  Most of the early offensive systems in this category are air-to-air missiles, where it is critical that a weapon discriminate between friendly and unfriendly aircraft, and solutions were developed rapidly.  Cruise missiles and anti-ship missiles quickly incorporated these technologies as well, allowing positive identification of their targets once they reached the engagement zone.

The two most recently emerging technologies are Target Image Discrimination and Loitering (i.e. Self-engagement).  The former has been aided by improvements in computer vision and image processing and is being incorporated into most new missile technologies.  The latter is emerging in certain standoff platforms as well as some small UAVs.  These represent a new frontier of autonomy, where the weapon does not have a specific target but a set of potential targets, and it waits in the engagement zone until an appropriate target is detected.  This technology appears on a small number of deployed systems, but is a heavy component of systems in development.

Figure 2: Distribution of autonomy in (a) deployed and developmental weapons systems and (b) non-UAV systems

Figure 2A

Figure 2B

Figure 2 shows the self-mobility and self-direction scores for all of the systems in the database, with the type of system indicated by shape and the originating nation by color.  Notice first that self-mobility is more generally incorporated into systems than self-direction in weapons use, and that self-mobility is almost always a prerequisite of self-direction.  This is sensible: there is little point in creating a system that can make targeting and weapons-deployment decisions if it cannot get to where the targets are.  The main exceptions are surface defense systems, both at ground stations and aboard ships.  Notably, these systems were the first to develop high levels of autonomy in weapons use, with many having full authority to acquire and fire at targets without a human in the loop by the mid-1980s.  The targeting systems in these platforms are sophisticated, able to identify, track, and prioritize tens of targets and deploy their weapons to counter them (anti-ship missiles, long-range air-to-ground missiles, cruise missiles).  These systems have incorporated target image recognition into their platforms, and some of the more advanced systems (e.g. LRASM, TARES) can choose and prioritize targets in their engagement zones, communicate with other weapons systems, and re-prioritize targets based on information in the field.  The key factor discriminating these weapons from the emerging UAVs (nEUROn, Taranis) on these indices is that the UAVs are persistent.  Incorporating the automated (no human-in-the-loop) target management platforms of the surface defense systems into these persistent UAVs would shift those capabilities from defensive to offensive, allowing them to manage their own weapons as well as standoff weapons entering the battlefield.  This is a likely point of evolution toward more autonomous weapons.

Given their levels of defense spending, it should not be surprising that the nations with the most systems are the United States, Russia, and China.  However, the developmental technologies with some of the highest levels of autonomy here come from the EU (nEUROn, TARES) and the UK (Taranis).  Part of the reason is that information on recent Russian and Chinese developments in the semi-autonomous weapons space is too sparse to score and include in the database; both nations are certainly working toward systems with capabilities similar to the LRASM and the LOCAAS.  Still, it should be noted that the incorporation of autonomy will not necessarily come only from the world’s strongest powers, and any balancing effect that may have is unlikely to be stabilizing.

Figure 3

Figure 3: Evolution of autonomy in self-mobility (dark blue) and self-direction in weapons systems (light blue) over time

Figure 3 shows how the first two indices of autonomy have developed over time; these are mean autonomy scores for the systems introduced during each decade (not for all deployed systems).  Self-mobility developed first, with all missile systems by definition being self-propelled and with homing and navigation rapidly incorporated to improve accuracy.  Early technologies in self-direction in weapons use were largely focused on surface defense systems, as mentioned above.  Recent increases reflect the incorporation of target identification/recognition and acquisition systems into more technologies.  Still, it should be noted that even today, most new weapons technologies do not incorporate high levels of self-direction in weapons use.  Most ground-based or infantry-aiding technologies actually score very low on this index and are largely manual in nature.  Target identification and acquisition are less valued in environments where people are actively working; instead (as shown in Figure 2) they find their way into technologies where either 1) decisions must be made more rapidly than a human can manage in order to counter an incoming threat, as in surface defense systems, or 2) the entire point of the weapon is to accomplish its goal without putting human operators in harm’s way, as in standoff weapons.