TSA’s Plans to Expand Facial Recognition at Airports: A Privacy Perspective

Author: Nandita Rao Narla, Head of Technical Privacy and Governance, DoorDash
Date Published: 15 June 2023

The US Transportation Security Administration (TSA) is expanding its facial recognition pilot program at airport screening checkpoints from 115 security lanes to 200 by the end of the year. The program uses Credential Authentication Technology with Camera (the CAT-2 ID system), which compares a live photo of the traveler taken at the checkpoint with the photo on their driver’s license or other government-issued ID card. The system also supports the phased rollout of digital IDs, including mobile driver’s licenses.
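For readers unfamiliar with how this kind of one-to-one (“1:1”) verification generally works, the sketch below is a minimal, hypothetical illustration, not TSA’s actual system: a face-recognition model converts the live capture and the ID photo into embedding vectors, and the two vectors are compared against a similarity threshold. The embedding function and the threshold value here are placeholders.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model (e.g., a neural network that maps
    a face crop to a fixed-length vector). Not TSA's actual model."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    vector = rng.normal(size=512)
    return vector / np.linalg.norm(vector)

def verify(live_photo: np.ndarray, id_photo: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one verification: does the live capture match the ID photo?

    Compares the cosine similarity of the two embeddings against a threshold
    chosen to balance false accepts against false rejects."""
    similarity = float(np.dot(embed(live_photo), embed(id_photo)))
    return similarity >= threshold

# Hypothetical usage: the inputs would come from the checkpoint camera and the
# photo read from the traveler's ID, respectively.
live_capture = np.zeros((112, 112, 3))
id_reference = np.zeros((112, 112, 3))
print("match" if verify(live_capture, id_reference) else "no match")
```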

The pilot program began at Ronald Reagan Washington National Airport amid the COVID-19 pandemic push for contactless services in 2020 and is currently deployed at 16 airports. TSA says automated facial matching raises identity-verification accuracy to close to 100 percent, up from the mid-80-percent range achieved when human agents visually compare a traveler to their ID photo. The technology is also expected to make identity verification faster, saving each traveler anywhere from a few seconds to a minute.

Privacy concerns with TSA’s use of facial recognition

However, the expansion of this program raises several potential concerns, outlined below:

Biometric surveillance. Biometric surveillance technologies collect and analyze biometric data from every individual who enters the space where they are deployed, even if the data are later deleted. This creates a perception of constant monitoring and produces “chilling effects” that restrict fundamental rights and freedoms. Over 2 million travelers pass through TSA checkpoints daily, and deploying facial recognition technologies at this scale raises concerns over government access to such large volumes of data. In February, five senators sent a letter to the TSA demanding the agency halt this program because “increasing biometric surveillance of Americans by the government represents a risk to civil liberties and privacy rights.”

Algorithmic bias. A 2019 study by the National Institute of Standards and Technology tested 18 million photos of more than 8 million people and found that Asian and African-American people were up to 100 times more likely than white men to be misidentified by facial recognition technology. The study also found that Native Americans had the highest false-positive rate of all ethnicities, that women were more likely to be misidentified than men, and that the elderly and children were more likely to be misidentified than other age groups. Algorithms developed in the United States also showed high error rates for “one-to-one” searches of Asians, African-Americans, Native Americans and Pacific Islanders. TSA has not released data on its facial recognition false-positive rates, and concerns about demographic equitability remain.
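To make the bias concern concrete, demographic disparity is usually quantified by computing a false-positive rate per group from impostor comparisons (pairs of different people). The sketch below uses invented data and only illustrates the measurement; it is not NIST’s or TSA’s actual methodology.

```python
from collections import defaultdict

# Invented comparison log: each record is a 1:1 comparison between two *different*
# people (an impostor pair), so any reported match is a false positive.
impostor_comparisons = [
    {"group": "A", "matched": False},
    {"group": "A", "matched": True},
    {"group": "B", "matched": False},
    {"group": "B", "matched": False},
]

def false_positive_rates(comparisons):
    """False-positive rate per demographic group = false matches / impostor trials."""
    trials, false_matches = defaultdict(int), defaultdict(int)
    for c in comparisons:
        trials[c["group"]] += 1
        false_matches[c["group"]] += int(c["matched"])
    return {group: false_matches[group] / trials[group] for group in trials}

print(false_positive_rates(impostor_comparisons))  # {'A': 0.5, 'B': 0.0}
# Large ratios between groups (NIST reported differences of 10-100x) indicate
# demographic inequity in the matcher.
```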

Inadequate consent. The facial recognition pilot program is currently optional, and travelers can opt out by using a lane without this technology. However, it is unclear whether travelers can provide informed consent for facial recognition: whether they are aware of their right to opt out, and whether they can do so without adverse consequences such as longer wait times. The agency’s 2022 roadmap vision also states that “TSA continues to expand its capabilities, including biometrics, to validate and verify an identity and vetting status in real-time (biometric capture only occurs where required or when individuals opt-in).” The “where required” use cases have not been specified.

Lack of transparency and assurance. TSA says facial images are deleted immediately after identity verification. The public Privacy Impact Assessment (PIA) states that scanned and live images are retained only until the next transaction is processed or the Transportation Security Officer (TSO) logs off the system, and that the system automatically logs off after 30 minutes of inactivity. However, no independent audits have yet been performed to validate these claims.

Insufficient security controls. In some cases, facial images may be retained for up to 24 months for testing and performance evaluation purposes. This extended retention period raises additional concerns about the effectiveness of security controls for such sensitive data. In 2019, the Department of Homeland Security disclosed that photos of travelers had been compromised in a data breach of one of its subcontractors’ networks.

Risk mitigation measures and technical safeguards

The PIA details several safeguards and risk mitigation mechanisms that are in place to address privacy risks, such as privacy training for TSA personnel, access provisioning on a need-to-know basis, adoption of federal data encryption standards for all data in transit and at rest, limits on the use of temporarily stored personal information, and deletion of images after identity verification. The agency also claims to follow data minimization practices, such as not collecting facial data by default: the camera turns on only when the traveler scans a physical or digital ID.
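As a rough illustration of the minimization pattern the PIA describes (capture only after an ID scan, discard the image once verification completes), here is a hypothetical sketch; the class and function names are invented and do not represent TSA’s implementation.

```python
from dataclasses import dataclass

@dataclass
class IDScanEvent:
    photo: bytes          # photo read from the scanned ID (illustrative stand-in)

class Camera:
    def capture(self) -> bytes:
        return b"live-image-bytes"   # stand-in for a real camera frame

class Matcher:
    def verify(self, live_image: bytes, reference_photo: bytes) -> bool:
        return bool(live_image and reference_photo)   # stub 1:1 comparison

def handle_traveler(id_scan_event, camera, matcher):
    """Capture only after an ID scan; discard the live image once verified."""
    if id_scan_event is None:
        return None                   # no scan, so the camera never activates
    live_image = camera.capture()     # camera turns on only at this point
    try:
        return matcher.verify(live_image, id_scan_event.photo)
    finally:
        del live_image                # image not retained; only the result is kept

print(handle_traveler(IDScanEvent(photo=b"id-photo"), Camera(), Matcher()))  # True
```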

More accountability and transparency are needed to address the skepticism around TSA’s expansion of its facial recognition program. Independent testing and audits can help ensure that privacy is protected and provide assurance that the technology does not disproportionately impact certain groups.

The privacy vs. security tradeoff debate: to be continued

Along with this facial recognition program expansion, TSA is running another pilot at select airports in which participating travelers are not required to scan their identity documents at all. Delta Air Lines’ optional TSA PreCheck Digital ID allows travelers to store their TSA PreCheck Known Traveler Number or Global Entry Number in their SkyMiles profile in the Delta app. It uses facial recognition to perform one-to-many matches, comparing travelers’ live photos to a database of photos the government already has, typically from passports. Travelers who opt in at check-in can verify their identity using only their face, without presenting a physical ID, digital ID or boarding pass. This expanded use of facial recognition also needs to be evaluated so that privacy is not sacrificed in the name of public safety at airports. Privacy vs. security is a false tradeoff, and technologies leveraging biometrics should be designed with privacy in mind.
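To make the distinction concrete: the checkpoint pilot performs one-to-one verification against a single reference photo, while PreCheck Digital ID relies on a one-to-many search across a gallery of photos already on file. The sketch below is a hypothetical illustration of that kind of search using placeholder embeddings and an invented threshold, not TSA’s or Delta’s actual system.

```python
import numpy as np

def best_gallery_match(live_embedding, gallery, threshold=0.6):
    """One-to-many identification: find the gallery entry (e.g., a passport photo
    on file) whose embedding is closest to the live capture."""
    scores = {person_id: float(np.dot(live_embedding, emb))
              for person_id, emb in gallery.items()}
    person_id, score = max(scores.items(), key=lambda kv: kv[1])
    return (person_id, score) if score >= threshold else (None, score)

# Hypothetical usage with random unit vectors standing in for face embeddings.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
gallery = {name: unit(rng.normal(size=512)) for name in ("traveler_a", "traveler_b")}
probe = unit(gallery["traveler_a"] + 0.01 * rng.normal(size=512))  # noisy re-capture
print(best_gallery_match(probe, gallery))  # identifies 'traveler_a', similarity near 1
```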

About the author: Nandita Rao Narla is the Head of Technical Privacy and Governance at DoorDash, where she leads the privacy engineering, privacy assurance and privacy operations teams. Previously, she was part of the founding team of NVISIONx.ai, a data visibility and data risk intelligence startup. As an advisory manager at EY, she helped Fortune 500 companies build and mature privacy, cybersecurity and information governance programs. Nandita serves on the advisory boards of the Extended Reality Safety Initiative (XRSI), the Techno Security & Digital Forensics Conference and IAPP - Privacy Engineering. She holds an MS in Information Security from Carnegie Mellon University, a BTech in Computer Science from JNT University, and privacy and security certifications including FIP, CIPP/US, CIPT, CIPM, CDPSE, CISM, CRISC and CISA.