Autonomous Weapons in the Military – What’s Possible and What’s Legal

Niccolo Mejia

Niccolo is a content writer and Junior Analyst at Emerj, developing web content and assisting with quantitative research. He holds a bachelor's degree in Writing, Literature, and Publishing from Emerson College.


The military is always looking for ways to innovate its technology for weapons and vehicles, and it follows that AI and ML would become part of that work in the current decade. Currently, the Army is testing autonomous vehicles and aircraft for battlefield use. However, most AI applications for these vehicles do not have clearance to operate the weapons attached to them. 

In all cases where a military robot is allowed to automatically fire a gun, a human soldier must make the final call. Despite the fact that current AI-enabled weaponry is not yet fully autonomous, the technology to make it that way almost certainly exists. But there are many technological and legal challenges facing this type of innovation.

In this article, we discuss the legality and challenges facing autonomous weapon projects for the military. We highlight the criteria for a legal system and point out the most difficult rules a project like this must satisfy. Additionally, we cover armed vehicles such as unmanned aircraft and tanks that are being developed to run autonomously with AI. Any U.S. military branches currently working on the development of these vehicles are noted as well, including projects that involve outside contractors.

We cover the following topics:

  • The Legality of Autonomous Weapon Systems – An Overview – The criteria every autonomous weapon system must adhere to in order to be considered safe, humane, and legal for international battlefields.
  • Ground Strike Teaming And Attack Planning – Automated air vehicles that can identify airborne and ground targets while maintaining flight formation and planning the next strike.
  • Advanced Targeting Systems – Autonomous weapon and combat vehicle systems that can find targets using machine vision and U.S. Army-developed sensor technology.

We begin our exploration of autonomous weapon systems in the military with an overview of their current legality for use on the battlefield.

The Legality of Autonomous Weapon Systems – An Overview

The U.S. Department of Defense (DoD) considers a weapon system to be autonomous when it can recognize targets and engage them automatically without any user input. This type of system is currently illegal, which means every autonomous weapon needs human approval in order to engage targets. While machine learning algorithms capable of powering such a system have only been possible for a few years, the prospect of fully autonomous weapons has been a concern ever since machine learning became more prominent.

As militaries have begun incorporating automated functions into their weapon systems, it follows that they will eventually try to develop fully autonomous weapons. There are two important categories of law to consider when determining the legality of a weapons system, including those powered by machine learning and AI. 

These categories are the rules governing the manufacture and use of weapons themselves, or “weapons law,” and the rules concerning a system’s ability to target places and people, or “targeting law.” Targeting law also determines cases in which an otherwise legal weapons system may be used unlawfully. In order to be lawfully used on a battlefield, an autonomous weapon system must comply with both aspects of the law.

Weapons Law

In order to determine whether an autonomous weapon system is lawful according to weapons law, there are two main factors to consider. The first is whether the weapon system is indiscriminate by nature. This is the case for weapon systems that cannot be aimed at a single, specific target and would be as likely to hit civilians as combatants.

It is important to note that autonomous weapons are not considered inherently indiscriminate, even if they had clearance to make the final targeting decision. A system would still comply with weapons law if it is possible to supply it with sufficient data to allow it to aim at specific military objectives.

The second rule of weapons law is that the system in question cannot cause unnecessary suffering or superfluous injury to combatants. This rule is meant to prevent needless or inhumane injuries on the battlefield. For example, warheads filled with glass, shrapnel, or other granular substances complicate medical treatment and so are deemed unlawful. An autonomous weapons system could be found lawful under this rule so long as each weapon or warhead the system uses complies with it.

The DoD has made these legal reviews a U.S. policy, and they are to be conducted at the early stages of development and again before the finished product is used in combat. If a system is changed significantly after its initial fielding, an additional review may be necessary. Any autonomous weapon system or changes to one would require this type of review.

Targeting Law

Once a new weapons system passes its weapons law review, it must still be examined under targeting law to make sure actual use of the weapon would not be prohibited for some reason. For this examination, three core principles of the law of armed conflict are particularly important. These include:

  • Distinction, or the ability to distinguish between combatants and civilians
  • Proportionality, or the assurance that any collateral damage caused by the weapon is not excessive in relation to the anticipated military gain
  • Attack precautions, or the ability for the autonomous system to take all feasible combat precautions into account. This includes verifying that each target is a military one.

Distinction

Distinction is the first requirement of targeting law, and is the most fundamental principle within the law of armed conflict. Distinction requires a combatant to be able to distinguish between military combatants and civilians, as well as between military and civilian objects.

This law is intended to protect the civilian population on or near a battlefield by directing attacks against only military targets. The context in which the weapon system is to be used may play a significant role in determining whether it satisfies the distinction requirement.

Because of this, there may be situations in which autonomous weapons could satisfy distinction law with a relatively low capability to distinguish between civilians and military targets. One example would be high-intensity conflicts in remote regions such as deserts, or underwater.

More complex situations would include urban environments or counterinsurgency operations on the battlefield. These would require a much more advanced ability to distinguish between military combatants and civilians. This is likely one of the biggest challenges for the U.S. military and its defense contractors when developing AI-powered weapons for the battlefield.

Proportionality

The second requirement is proportionality. This requires combatants to examine whether the expected collateral damage from an attack would be excessive in relation to the anticipated military gain from completing the attack. This is a complex issue that reflects customary international law, and it has usually involved a human making the final judgement call.

It is still unclear whether autonomous weapon systems will ever be able to safely fulfill this requirement on their own. In order to comply with this principle, autonomous systems would need to be able to estimate collateral damage from a strike and then compare that with the army’s perceived value of completing the attack. This type of measurement is highly contextual and would require a machine learning model to be trained to infer the strategic value of certain battlefield locations and enemy camps.

Because the value of dispatching combatants or striking particular points on the battlefield can change rapidly, human operators may need to develop mechanisms that regularly update these values. They would also need to provide the autonomous system with the relative strategic value of a given target.
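As a rough illustration of the structure of this test, the sketch below compares a model’s collateral damage estimate against an operator-supplied value for the anticipated military advantage. The class, field names, and threshold are hypothetical, and the output is only a recommendation; under current policy, the final judgement call stays with a human operator.

```python
# Hypothetical sketch of the proportionality test's structure. All names and
# the threshold are illustrative; the final call remains with a human operator.

from dataclasses import dataclass

@dataclass
class ProportionalityInput:
    target_id: str
    anticipated_military_advantage: float  # relative value supplied and updated by human operators
    estimated_collateral_damage: float     # system's estimate for the proposed strike

def proportionality_recommendation(inp: ProportionalityInput, threshold: float) -> str:
    """Return a recommendation only; a human makes the final judgement call."""
    if inp.anticipated_military_advantage <= 0:
        return "reject: no recognized military advantage"
    ratio = inp.estimated_collateral_damage / inp.anticipated_military_advantage
    if ratio >= threshold:
        return "reject: expected collateral damage likely excessive"
    return "refer to human operator for final approval"
```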

If a weapon system cannot maintain compliance with proportionality for a general battlefield setting, the developers may suggest strict contexts under which the system may be used. The developer and the state must agree that the system is within compliance of the law, including any stipulations on when it can be used, in order to pass this part of the review.

Attack Precautions

The final targeting law requirement for autonomous weapon systems is the obligation to take all feasible precautions when attacking. These precautions could be challenging for autonomous systems because they would need to be able to do everything feasible to verify that an individual is a military target and not a civilian.

The word “feasible,” in the context of targeting law, refers to what is practically possible when taking all prevailing circumstances into account. These include humanitarian and military considerations, such as keeping civilians or hostages safe.

There may be some situations where the automatic target recognition (ATR) capabilities of an autonomous weapon system would be more reliable than a human in fulfilling this requirement. For example, a human soldier may not be able to recognize all five of the targets stationed in front of a target encampment, but ATR may be able to do so using machine vision and heat signatures.

Another challenge for autonomous weapon systems would be the requirement to do everything feasible to avoid and minimize collateral damage during an attack. This may preclude the usage of such a system if a different type of system would better protect civilians. Because there is an intrinsic value judgement within each type of precaution, there are likely numerous situations in which an autonomous system cannot be used with our current laws.

Ground Strike Teaming And Attack Planning

Some autonomous weapon systems take the form of unmanned aircraft that use cameras to record the environment around them and relay it back to a human operator. Additionally, these planes are able to detect other allied aircraft around them and plan ground strikes from above.

This type of system would require at least two types of AI software to function properly. These include machine vision for visualization and target recognition, and predictive analytics for assessing threats and deciding on the best course of action for dispatching them. 

Machine Vision

A machine learning model for an autonomous aircraft’s machine vision component would need to be trained on thousands of hours of in-flight footage from fighter pilots in past combat missions. This includes footage of the ground below, the airspace around the fighter jet, and strategic movement choices made by the team. Each important element in this footage would need to be labeled, including civilians, enemy combatants, enemy aircraft, and buildings that would cause significant collateral damage if destroyed.

Once trained, the machine learning model should be able to distinguish between civilians, allied soldiers, and enemy combatants. Additionally, it should be able to detect threatening entities such as enemy aircraft and ground vehicles, along with buildings. With machine vision, an autonomous aircraft for ground strikes may be able to fly on its own and follow a team of human fighter pilots to fight alongside them.
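As a minimal sketch of how such labeled footage might be organized before training, the structures below show one possible label set and per-frame annotation format. The class names and fields are assumptions made for illustration, not a description of any actual military dataset.

```python
# A minimal sketch of how labeled in-flight footage might be organized before
# training. Class names and fields are assumptions made for illustration.

from dataclasses import dataclass
from enum import Enum

class EntityClass(Enum):
    CIVILIAN = "civilian"
    ALLIED_SOLDIER = "allied_soldier"
    ENEMY_COMBATANT = "enemy_combatant"
    ENEMY_AIRCRAFT = "enemy_aircraft"
    GROUND_VEHICLE = "ground_vehicle"
    HIGH_COLLATERAL_STRUCTURE = "high_collateral_structure"  # e.g., hospitals, dense housing

@dataclass
class Annotation:
    bbox: tuple[float, float, float, float]  # normalized (x, y, width, height)
    label: EntityClass

@dataclass
class FlightFrame:
    mission_id: str
    timestamp_s: float
    annotations: list[Annotation]
```

Frames organized this way could then feed a standard object detection pipeline, with the enumeration above serving as the detector’s class set.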

Predictive Analytics

The predictive analytics portion of an autonomous aircraft for ground strike teaming and attack planning would need to be trained on a vast amount of data of various types to function effectively. In the context of air combat, “strike teaming” refers to the decision making process of dividing a large group of soldiers into smaller teams during a mission. This requires training data from past air combat engagements that includes the routing of each pilot, the number of pilots on each team, and some context into why the teams were split that way. 

Contextual data could include visual information from the machine vision portion of the aircraft, as well as radio commands from home base that help the pilots know exactly where to go. Additionally, the aircraft’s machine learning model would need to be trained on air strike strategy and how it affects the numerical “value” of each target or building in the sights of the machine vision camera. 

Once the machine learning model is fully trained, it would be able to apply the air strike strategy data to whatever the machine vision camera can currently visualize. This would help it track targets and decide the best way to dispatch them in cases where human soldiers cannot properly signal it to follow their own plan. Additionally, the aircraft could follow other teams of fighter pilots and strike when it is ordered to, or when the entire team is ordered to open fire.
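The sketch below shows a hypothetical structure for a single training record drawn from a past engagement, combining the kinds of data described above: pilot routes, how and why teams were split, radio commands, and operator-assigned target values. All field names are assumptions made for illustration.

```python
# Hypothetical structure for one training record from a past air combat
# engagement. Field names are illustrative, not any real dataset's format.

from dataclasses import dataclass

@dataclass
class PilotRoute:
    pilot_id: str
    waypoints: list[tuple[float, float, float]]  # latitude, longitude, altitude

@dataclass
class TeamSplit:
    teams: list[list[str]]  # pilot IDs grouped into smaller teams
    rationale: str          # analyst-supplied context for why the split was made

@dataclass
class EngagementRecord:
    engagement_id: str
    routes: list[PilotRoute]
    team_split: TeamSplit
    radio_commands: list[str]        # transcribed commands from home base
    visual_detections: list[str]     # references to machine vision annotations
    target_values: dict[str, float]  # operator-assigned strategic value per target
```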

Lockheed Martin’s Have Raider

One defense contractor that has worked with the U.S. military on an autonomous weapon aircraft system is Lockheed Martin Skunk Works. The military wanted to enable rapid actions and responses from manned and unmanned teams of aircraft as soon as new combat information became available. They were particularly concerned with successful teaming because of its potential to increase combat efficiency and situational awareness for each warfighter.

Lockheed Martin demonstrated their AI-powered Have Raider for the U.S. military in 2017, and the vehicle had to complete four important tasks to prove its manned/unmanned ground strike and teaming capabilities. These four tasks are as follows:

  • Plan a ground strike mission automatically
  • Execute tasks prioritized by a human operator
  • Replan the mission on the fly in order to minimize exposure to identified threats
  • Demonstrate autonomous route following, rejoining, and formation flying

Shawn Whitcomb, Lockheed Martin Skunk Works Loyal Wingman program manager, spoke about the results of the Have Raider demonstration:

“The Have Raider II demonstration team pushed the boundaries of autonomous technology and put a fully combat-capable F-16 in increasingly complex situations to test the system’s ability to adapt to a rapidly changing operational environment. This is a critical step to enabling future Loyal Wingman technology development and operational transition programs.”

The Loyal Wingman program is a Skunk Works initiative for developing autonomous aircraft for combat and other military missions.

Lockheed Martin used a repurposed F-16 fighter jet for the demonstration. The aircraft was able to automatically plan and carry out air-to-ground strike missions, fly in formation with a manned aircraft, and react to changing threat levels as the environment changed during flight.

Making use of this type of machine learning-enabled flight and teaming may reduce the workload for soldiers in flight. This could allow them to react more quickly to changes on the battlefield and keep them safe. 

Advanced Targeting Systems

The U.S. Army is currently working with proposals from multiple defense contractors to create a fully automated ground vehicle that can fight alongside human soldiers. They refer to their current model as the Advanced Targeting and Lethality Automated System (ATLAS). The latest test of the ATLAS system took place in 2017, when the Army held the first demonstration of its kind.

The initiative to develop ATLAS is led by the Army’s Night Vision and Electronic Sensors Directorate (NVESD). The directorate likely incorporates its sensor technology into the project to enable accurate readings and assist machine vision.

This sensor technology is called the Integrated Sensor Architecture (ISA). The architecture allows sensors to share information with each other, as well as with human-operated computers, without needing physical point-to-point integration.
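The ISA’s actual interfaces are not described here, but the general idea of sensors sharing data over a common bus rather than through point-to-point wiring can be sketched with a simple publish/subscribe pattern. The class, topic, and field names below are hypothetical.

```python
# A generic publish/subscribe sketch of a shared sensor architecture: sensors
# publish readings to a common bus and any consumer subscribes, with no
# point-to-point integration between devices. Not the ISA's actual interface.

from collections import defaultdict
from typing import Callable

class SensorBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, reading: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(reading)

# Example: a thermal sensor publishes a reading; an operator console and a
# recording service both receive it without being wired to the sensor directly.
bus = SensorBus()
bus.subscribe("thermal", lambda r: print("console:", r))
bus.subscribe("thermal", lambda r: print("recorder:", r))
bus.publish("thermal", {"sensor_id": "ir-07", "temperature_c": 36.5})
```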

Even if the ISA does not end up directly integrated into the final ATLAS vehicle, the architecture is likely invaluable for training machine vision models. This is because the data scientists training the models likely have access to hundreds of thousands of recorded combat situations from multiple vantage points, which can then be labeled and used for training.

The machine learning model for ATLAS’ machine vision portion would be trained on battles in the same environment in which it is to be deployed, as well as select battles that the Army suspects will be similar to future important ones.

While it is unclear when the Army will begin deploying armed ground robots like the ATLAS project, their use hinges on performing exceptionally during tests and demonstrations. 

At the Association for Unmanned Vehicle Systems International (AUVSI) conference, Paul Rogers, director of the Army’s Tank Automotive Research Development and Engineering Center (TARDEC), said:

“We are on that path of exploration. It’s experiment, test, feedback, [address] concerns, challenges and limitations. You go through an iterative approach. But the faster we can turn those iterations, the better.” Additionally, he said the Army is in a phase of “constant, rolling, and evolving experimentation.”

At a U.S. Army industry day for ATLAS, each aspect of the technology required to develop the system was discussed. Each has its own possibilities for AI applications, but only a few presentations explicitly mentioned AI or machine learning algorithms. The sections of the day that mentioned machine learning and artificial intelligence are as follows:

  • Image Processing Topics, including AI/ML algorithms and automated image search
  • Data Collection, including managing that data, organizing it within databases, and using it to train ML algorithms
  • Fire Control, or advanced targeting algorithms

While the feedback from the ATLAS industry day is unclear, it is important to note that AI and ML were spoken about as integral parts of the project. Currently, the NVESD sensors are not directly built into any aspect of the tank’s AI technology. However, the idea is not entirely out of the question as it could bolster ATLAS’ situational awareness and ability to locate hostile targets.

Header Image Credit: ZDNet
