Facial Recognition in the Military – Current Applications

Millicent Abadicio

Millicent is a writer and researcher for Emerj, with a career background in traditional journalism and academic research.

The US Department of Defense’s DARPA has a plan to invest as much as $2 billion in artificial intelligence research and development in the next 5 years. This is on top of the $2 billion the federal government has already spent on AI-based technology R&D.

There is undoubtedly much interest in the US in plumbing the depths and potential of various types of AI technology for defense and military use, and the investment is a significant step toward catching up in a technological race with China. However, it may not be enough.

Lt. Gen. VeraLinn Jamieson, Deputy Chief of Staff for Intelligence, Surveillance, and Reconnaissance of the U.S. Air Force, said China spent $12 billion on AI R&D in 2017 and estimates that spending could reach as much as $70 billion by 2020.

The main beneficiaries of this investment by the Chinese government are AI startups, and this is most evident in the facial recognition arena. Cameras are everywhere in China, and citizens accept being almost continually monitored. As a result, the development of AI-based facial recognition algorithms has taken off.

DARPA has a problem, however. While a majority of people in the US have a tolerant attitude toward facial recognition in civilian settings such as airports, retail stores, and public areas, military use is a different matter. Many people are wary of the use of facial recognition and other AI-based technology in a military context.

The wariness stems from the purpose of capturing the image or video. In a retail context, for example, cameras capture images and footage to track conversion. Retailers are not particularly interested in individuals, but in their buying behavior. In our interview with Capillary Technologies CEO Aneesh Reddy, he stated:

We do not capture any personally identifiable information. What it does is get a quick grasp on what your demographics [are].

In a military context, the purpose of facial recognition is to identify, classify, verify, and, if needed, neutralize any perceived threat. In the interest of security, this may seem a reasonable application. However, a major concern with facial recognition is its accuracy.

The National Institute of Standards and Technology (NIST) conducted accuracy testing for different facial recognition algorithms and concluded that the 28 algorithms tested had just a 0.2% failure rate. However, other research indicates the technology is not accurate enough to identify people correctly and exhibits some bias, particularly for people with dark skin.

Such inaccuracies, when coupled with automated lethal force, can lead to fatal mistakes. Concerned parties are calling for regulations, but the form these will take is under vigorous debate.  

These concerns led Google employees to react strongly against Project Maven, an ongoing contract with the DoD for AI-based drone surveillance, and pushed the company to decide not to renew the contract when it expires in March 2019. Many employees believe the company should avoid engaging in the “business of war.”

This belief also pushed Google to drop out of bidding for a much larger, single-contractor DoD project dubbed the Joint Enterprise Defense Infrastructure (JEDI). Worth about $10 billion over 10 years, the contract to migrate the Pentagon’s data to the cloud is still being pursued by other tech giants such as Amazon, Microsoft, and IBM, as are other contracts that specify the use of facial recognition.

Google is not the only company whose employees have a problem with working with the military, however; many of the biggest tech companies continue to work with the military nonetheless.

Aside from big tech companies, the military is also looking to talent from smaller private companies and academia to meet the challenges of an AI-based battlefield. The US Army formally launched its Artificial Intelligence Task Force on February 1, 2019, envisioned as a network of AI experts from private enterprise and academic institutions. The task force is headquartered at Carnegie Mellon University.

This article will discuss current applications of facial recognition in the military. These applications tackle the common sources of error in recognizing or matching faces, including differences in angle, scale, illumination, and resolution, as well as the scarcity of training data.

Adaptive Technology and Detecting Perceived Threats

Facial recognition is a non-contact method for identity search and verification. Images and video may be captured without interaction with the subject, which makes it an efficient and effective security method. However, facial recognition is not always accurate.

The accuracy depends on the ability of the software to detect and match nodal points of the face. Most facial recognition algorithms require copious amounts of training data and are sensitive to differences between the stored and target images.

When there is not enough data to train the algorithm to recognize patterns, or when the target image is fuzzy or captured under unfavorable conditions, the software cannot attain a high level of accuracy. Both problems are common in real-world situations.
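
To make these sensitivities concrete, below is a minimal, hedged sketch of how a typical verification pipeline compares a stored enrollment photo with a newly captured target photo. It uses the open-source face_recognition library purely as a stand-in for whatever software a military system would actually run, and the 0.6 distance threshold is an assumption; tightening or loosening it trades false matches against false non-matches.

```python
# Illustrative only: a generic stored-vs-target comparison, not any military system.
# Assumes the open-source face_recognition library (pip install face_recognition).
import face_recognition

stored_image = face_recognition.load_image_file("stored_subject.jpg")   # enrollment photo
target_image = face_recognition.load_image_file("captured_frame.jpg")   # newly captured image

stored_encodings = face_recognition.face_encodings(stored_image)
target_encodings = face_recognition.face_encodings(target_image)

if not stored_encodings or not target_encodings:
    # Poor illumination, an extreme angle, or low resolution can prevent detection entirely.
    print("No usable face found in one of the images")
else:
    # Euclidean distance between 128-dimensional face embeddings.
    distance = face_recognition.face_distance([stored_encodings[0]], target_encodings[0])[0]
    # 0.6 is the library's conventional cutoff; the right threshold is application-specific.
    print("match" if distance < 0.6 else "no match", f"(distance = {distance:.2f})")
```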

To counteract this, the Naval Air Warfare Center Weapons Division at China Lake developed adaptive facial recognition software. The brainchild of Katia Estabridis, the software requires far less data to train the algorithm and has the ability to incorporate and use new data as it comes in.

The software can also tolerate differences between stored and target images, such as illumination, angle, and scale, especially when more than one sensor comes into play.

This adaptive technology requires, at minimum, one electronic processor associated with a classification processing tool, one database containing several images of known subjects, and one test image of an unknown subject, according to the center. The system builds a dictionary from the images of known subjects and classifies the test image against it until it makes a match.

A good use for this type of software is to distinguish friend from foe. The military typically keeps multiple identification records of its personnel, which certainly include photos. The software can quickly identify faces that are not in its database, tag them, and alert personnel to the presence of unauthorized individuals. This can be a valuable asset on an active battlefield.
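
The center’s description does not spell out the algorithm, but the general idea of classifying a test face against a dictionary of known subjects, and flagging anything that does not fit, can be sketched roughly as below. This is a simple nearest-subspace classifier over per-person reference vectors with an assumed rejection threshold for raising an “unknown face” alert; it illustrates the concept, not the Navy’s actual software.

```python
# Rough illustration of dictionary-based classification with rejection of unknowns.
# Not the NAWCWD algorithm; the feature vectors, gallery layout, and threshold are assumptions.
import numpy as np

def classify_face(test_vector, gallery, residual_threshold=0.35):
    """gallery maps a person ID to an (n_features, n_images) matrix of reference vectors."""
    best_id, best_residual = None, np.inf
    for person_id, references in gallery.items():
        # Least-squares fit of the test face as a combination of this person's references.
        coeffs, *_ = np.linalg.lstsq(references, test_vector, rcond=None)
        residual = np.linalg.norm(test_vector - references @ coeffs) / np.linalg.norm(test_vector)
        if residual < best_residual:
            best_id, best_residual = person_id, residual
    if best_residual > residual_threshold:
        return None  # no adequate match: tag as unknown and alert operators
    return best_id

# Example with random stand-in feature vectors (two known subjects, three images each).
rng = np.random.default_rng(0)
gallery = {"sgt_a": rng.normal(size=(128, 3)), "cpl_b": rng.normal(size=(128, 3))}
print(classify_face(rng.normal(size=128), gallery))  # almost certainly None, i.e. unknown
```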

While the software focuses on facial recognition, and its military application in this guise is identifying authorized personnel, it may also be adapted to recognize ships and electronic warfare-specific emitters or signals as needed.

Thermal Imaging and Convolutional Neural Networks

The military has a problem when it comes to identifying faces in low light or night conditions. Standard facial recognition software relies on visible details in an image or video to make a match, and even then, some algorithms do not achieve accurate results. When a subject’s face is in shadow or not visible at all because of the lack of a light source, the lack of details will make accurate facial recognition unlikely.

The military makes use of thermal imaging to detect the presence of a person, and it is possible to capture the image of a recognizable face. However, conventional facial recognition will still be unable to make an accurate match, as visible details would not be available.

To address this problem, scientists at the U.S. Army Research Laboratory came up with a method that uses existing facial recognition software on a visible-spectrum image synthesized from a thermal image. Thermal images plot hot and cold areas of the face, which provide enough information for synthesis with the right method.

The scientists used a convolutional neural network, or CNN, a type of deep learning algorithm, to fill in the missing details in local regions of the face (eyes, nose, mouth) and synthesize a visible face. CNNs work somewhat like the human brain in that they can extrapolate a picture from a small amount of data by assigning values to certain aspects of an incomplete image and making connections. While the result is not photo-realistic, it contains enough key points, or landmarks, for facial recognition software to make an accurate match in many cases.
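
As a rough illustration of what such a synthesis network can look like, the sketch below defines a small encoder-decoder CNN in PyTorch that maps a single-channel thermal face crop to a grayscale visible-style image. The layer sizes, the L1 training loss, and the random stand-in data are assumptions chosen for brevity; the ARL team’s actual model and training procedure are more sophisticated.

```python
# Illustrative thermal-to-visible synthesis network; not the ARL architecture.
import torch
import torch.nn as nn

class ThermalToVisible(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the thermal image into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to image resolution in the visible domain.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, thermal):
        return self.decoder(self.encoder(thermal))

model = ThermalToVisible()
thermal_batch = torch.rand(8, 1, 128, 128)    # stand-in thermal face crops
visible_target = torch.rand(8, 1, 128, 128)   # stand-in paired visible images
loss = nn.functional.l1_loss(model(thermal_batch), visible_target)
loss.backward()  # in practice this runs inside a full training loop with an optimizer
```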

Benjamin Riggan was one of the researchers who proposed this method. He stated, “When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventionally visible imagery from known persons of interest.” By using a different method of synthesis, Riggan and his associates have potentially made it possible for the military to identify faces in the dark.

Drones and Video Footage Analysis

Unmanned aerial vehicles, also known as drones, have been used extensively by the military for various purposes. Drones became an essential part of military operations with the invention of the MQ-1 Predator, which was flown over Afghanistan in late 2001 and enabled reconnaissance, surveillance, and intelligence gathering over inaccessible or dangerous areas. Most of the 8,000 or so drones in the US military’s inventory are capable of shooting video, and some of the bigger ones can even take high-resolution photos.

The problem is processing all of these videos and photos in a timely manner. It takes human analysts a long time to go through reams of footage captured by these drones, and the situation on the ground can change in that time.

To speed up the analysis, the military turned to artificial intelligence. It specifically contracted Google to build AI software using its TensorFlow AI systems as part of the Algorithmic Warfare Cross-Function Team, or Project Maven.

The contract involves using the AI software to analyze footage, detect threats, and earmark objects of interest for review by human analysts, whose assessments then serve as the basis for military decisions. The DoD claims the information derived from machine learning-assisted analysis can help minimize collateral damage, mitigate threats, and keep soldiers on the ground safe.
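
Public reporting does not detail Project Maven’s actual pipeline, but the basic pattern of running an object detector over sampled drone video frames and surfacing only high-confidence detections for human review can be sketched as follows. The TensorFlow Hub model handle, the confidence cutoff, and the frame-sampling rate are assumptions made for illustration.

```python
# Illustrative frame-by-frame detection on drone footage; not Project Maven's pipeline.
import cv2                       # pip install opencv-python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed model handle: a publicly available SSD detector hosted on TensorFlow Hub.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

cap = cv2.VideoCapture("drone_footage.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_index += 1
    if frame_index % 30 != 0:    # sample roughly one frame per second of 30 fps video
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.expand_dims(tf.convert_to_tensor(rgb, dtype=tf.uint8), 0)
    result = detector(batch)
    scores = result["detection_scores"][0].numpy()
    boxes = result["detection_boxes"][0].numpy()
    # Only flag confident detections for a human analyst to review.
    for score, box in zip(scores, boxes):
        if score > 0.6:
            print(f"frame {frame_index}: object of interest at {box.round(2)} (score {score:.2f})")
cap.release()
```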

Tireless Sentries and Military Bases

A surveillance system atop Edwards Air Force Base, courtesy of Edwards Air Force Base

Most people take it for granted that military bases have ultra-tight security, and they would be right. However, maintaining that security takes some doing, as the sheer size of most of these bases demands a significant investment in personnel. This could change, in part, due to AI.

Major military bases today take full advantage of technological advances, including artificial intelligence, to ensure physical security.

The key is the strategic placement of a wide network of sensors and cameras connected to machine vision software designed to spot anomalies in the footage and alert human operators. At Edwards Air Force Base, for example, a system of ground-based radars sweeps over its 308,000 acres.

Human operators need to monitor these feeds constantly, but it is easy to miss one thing out of a hundred at the end of a long shift or during a momentary distraction. AI software picks up the slack and increases the effectiveness of human operators.
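
The article’s sources do not describe the specific software in use at any particular base, but a bare-bones version of “spot anomalies in the footage and alert a human” can be illustrated with simple background subtraction in OpenCV. The camera source, the pixel-area threshold, and the alert mechanism here are all placeholder assumptions.

```python
# Minimal motion/anomaly alerting on a camera feed; a placeholder, not base security software.
import cv2  # pip install opencv-python

cap = cv2.VideoCapture("perimeter_camera.mp4")   # placeholder camera source
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

ALERT_AREA = 5000  # assumed minimum number of changed pixels worth an operator's attention
frame_index = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_index += 1
    mask = subtractor.apply(frame)        # foreground (changed) pixels
    moving_pixels = cv2.countNonZero(mask)
    if moving_pixels > ALERT_AREA:
        # In a real deployment this would raise an alert in the operators' console.
        print(f"frame {frame_index}: possible intrusion ({moving_pixels} changed pixels)")
cap.release()
```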

Secret Service and the White House Trial

Facial recognition is rapidly gaining favor as an efficient security feature in airports, cutting down the time and effort it takes for both passengers and airport personnel to get through security. The commercial AI-based software used by these airports appears to deliver what it promises.

It would seem likely, then, that the US government would take advantage of these readily available entry and perimeter security tools by implementing them in critical areas such as military bases. Strangely enough, that is not the case. According to the Defense Biometric Identification System (DBIDS) website, military installations only use identity cards and fingerprint biometrics at the gate.

The US Secret Service, however, is trying something new. It recently tested facial recognition software using footage from the existing closed circuit television (CCTV) system at the White House with a test population of Secret Service volunteers. The test was part of the Facial Recognition Pilot (FRP) program to ascertain the accuracy of the software in identifying the volunteers in public spaces. The volunteers stood in for the known “subjects of interest” the software will eventually have to identify.

The November 2018 test was the first phase of the planned implementation of facial recognition technology in the White House to replace the Uniformed Division Officers and Special Agents it currently uses to identify these subjects of interest in and around the grounds.

Further tests will continue until August 2019, at which point the Secret Service will determine whether to go forward with implementing facial recognition technology, provided it complies with the terms of the Privacy Impact Assessment from the Office of Technical Development & Mission Support. The document does not name the specific facial recognition software used in the trial.

Header Image Credit: Engadget
