Story at a glance

  • Law enforcement in the United States has made extensive use of facial-recognition technology to identify suspects.
  • But a landmark federal study now finds evidence of bias based on race, gender and age within these systems.
  • The results confirm those of earlier studies and raise fundamental concerns about the expanding use of facial-recognition technology.

Facial-recognition technology is more likely to misidentify people of color than white people, according to the results of a sweeping federal study. The findings call into question the expanding use of such algorithms in law enforcement across the United States, the Washington Post reports.

The study of nearly 200 facial-recognition algorithms from 99 organizations found that Asian and African American people were up to 100 times more likely to be incorrectly identified than white men. Of all ethnicities, Native Americans were the most likely to be misidentified.

Women were also more likely than men to be falsely matched, as were children and the elderly, the study found. Overall, the algorithms were most accurate for middle-aged white men.

The accuracy of the facial-recognition systems varied widely depending on the algorithm being tested and the kind of search being performed.

African American women were misidentified most frequently in the type of search most often used by police, a “one-to-many” search in which an image is compared against a huge database of photos in hopes of determining a suspect’s identity.

Algorithms developed in the U.S. tended to produce more errors when identifying Asians, African Americans, Native Americans and Pacific Islanders in “one-to-one” searches, the kind used to unlock cell phones and to pass through security at some airports. Such errors present significant security risks and, in a worst-case scenario, could result in the arrest of an innocent person.
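To make the two search modes concrete, here is a minimal Python sketch, not drawn from the study itself: the 128-dimensional “embeddings,” the names and the 0.6 similarity threshold are all hypothetical, but the structure shows how one-to-one verification (a phone unlock) differs from one-to-many identification (a police database search), and how a mismatched pair that clears the threshold becomes a false positive.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for real face embeddings, which a face-encoder network
# would normally produce from photos.
rng = np.random.default_rng(seed=0)
enrolled = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}
probe = enrolled["alice"] + rng.normal(scale=0.3, size=128)  # a noisy photo of alice

THRESHOLD = 0.6  # hypothetical operating point; real systems tune this on labeled data

def verify(probe: np.ndarray, claimed_name: str) -> bool:
    """One-to-one search: does the probe match the single claimed identity?
    This is the mode used to unlock phones or clear airport security."""
    return cosine_similarity(probe, enrolled[claimed_name]) >= THRESHOLD

def identify(probe: np.ndarray):
    """One-to-many search: which enrolled identity, if any, matches the probe?
    This is the mode police use against large photo databases."""
    scores = {name: cosine_similarity(probe, emb) for name, emb in enrolled.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= THRESHOLD else (None, 0.0)

print(verify(probe, "alice"))  # True when the noisy photo still clears the threshold
print(identify(probe))         # ("alice", score), or (None, 0.0) if nothing matches
```

Note that a one-to-many search compares the probe against every entry in the database, so the larger the database, the more chances there are for a spurious high score; this is one reason evaluations treat the two modes separately.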

The report found “empirical evidence” that the majority of the algorithms studied exhibit “demographic differentials,” meaning their accuracy varies with a subject’s age, race and gender.
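A demographic differential can be quantified as the ratio of false-match rates between groups at a fixed decision threshold. The sketch below is illustrative only: the score distributions and group labels are fabricated rather than taken from the study, but the calculation is the kind that underlies a claim such as “up to 100 times more likely to be misidentified.”

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Fabricated impostor-pair scores: similarity values for pairs of photos that
# are NOT the same person. NIST derives these from millions of real photo pairs.
impostor_scores = {
    "group_a": rng.normal(loc=0.30, scale=0.10, size=100_000),
    "group_b": rng.normal(loc=0.42, scale=0.10, size=100_000),
}

THRESHOLD = 0.6  # same hypothetical operating point as in the sketch above

# False-match rate: the fraction of impostor pairs the system wrongly accepts.
fmr = {group: float(np.mean(scores >= THRESHOLD))
       for group, scores in impostor_scores.items()}

print(fmr)
# The "demographic differential" is the ratio between groups at this threshold.
print("differential:", fmr["group_b"] / fmr["group_a"])
```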

The results cast a long shadow over law enforcement’s fast-growing use of the technology. For example, the FBI has conducted more than 390,000 facial-recognition searches of state driver’s-license databases and other records since 2011, according to federal records.

Cities in California and Massachusetts banned the use of facial recognition by public officials this year. California also prohibited its use in police body cameras.

The study corroborates prior research that also found alarmingly high error rates and that drew methodological criticism from companies including Amazon. Notably, Amazon’s algorithm was not evaluated in the new report because the company declined to submit it for study. The companies whose products were evaluated include Idemia, Intel, Microsoft, Panasonic, SenseTime and Vigilant Solutions.

Patrick Grother, one of the researchers who helped produce the report, wrote in a statement that he hoped its findings would be “valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms.”

Several lawmakers called on the Trump administration to halt current plans to expand the use of facial-recognition technology in the U.S.

"Even government scientists are now confirming that this surveillance technology is flawed and biased," Jay Stanley, a senior policy analyst at the American Civil Liberties Union, told the Post. "One false match can lead to missed flights, lengthy interrogations, watchlist placements, tense police encounters, false arrests, or worse."

Published on Dec 20, 2019