Understanding face recognition and charting a path forward
Today, individuals, private corporations, and governments are using face recognition in many positive ways, such as helping tag friends in social media photos, controlling access as part of security operations, and identifying terrorists. It is also one of the most powerful tools for combating the scourge of child sex trafficking and underage pornography.
However, as with many emerging technologies, there is also the potential to misuse face recognition to cause harm. Such use, even if unintentional, could have a profound and negative impact on our civil liberties. Clearly, this is something we must avoid.
Given its current and future impact, we must find a way to ensure that we continue benefitting from face recognition, while also ensuring that it is not being misused. Doing so first requires that the public, policymakers, and legislators understand the technology.
Unfortunately, face recognition is a complex and highly technical field. So, to understand the issues, let's tackle a few factors that aren't always properly understood.
At the most basic level, recognition algorithms compare two facial images and produce a measure of how similar they are: the more similar the images, the more likely they show the same individual. These algorithms do not estimate gender, ethnicity, or age. Separate algorithms attempt those tasks, and they also use facial images, but they are not performing face recognition.
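To make the comparison step concrete, here is a minimal sketch. Real systems extract a numeric "template" (embedding) from each face image with a trained model; the names, vectors, and the use of cosine similarity below are illustrative assumptions, not any vendor's actual method.

```python
import math

def cosine_similarity(a, b):
    # Compare two face templates; a higher score means the two images
    # are more likely to show the same individual.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for templates a real model would produce.
same_person = cosine_similarity([0.9, 0.1, 0.4], [0.88, 0.12, 0.41])
different = cosine_similarity([0.9, 0.1, 0.4], [0.1, 0.95, 0.2])
```

Note that the output is only a similarity score, not an identity, a gender, or an age; deciding what to do with the score is a separate step.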
One of the least understood but most important issues with face recognition is the accuracy of the technology. Accuracy statistics are incredibly complex. It is critical to understand that we must use different metrics depending on how the system will operate.
For some applications and uses, face recognition is remarkably accurate. The technology can easily support verification applications, where the system compares a newly captured photo against a single enrolled image, such as when you unlock your phone. Open-set identification (where the system tries to determine whether you're in a database and, if so, who you are) is much more difficult. These applications typically use face recognition as a tool — a starting point for humans to further analyze before making any decisions.
The National Institute of Standards and Technology’s (NIST’s) latest Face Recognition Vendor Test (NISTIR 8238) shows that identification performance of the top system is 99.5 percent on a database of 12 million individuals — using mugshot-quality imagery. (Note that I’m oversimplifying metrics here to avoid getting into a complex discussion of multi-dimensional statistics. Real-life conditions will reduce accuracy.)
This amazing level of accuracy is sufficient for targeted surveillance applications within a focused scope — but attempting to track everyone would produce so many errors that the data gathered would be far too inaccurate to use.
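A back-of-the-envelope calculation shows why errors swamp mass surveillance. The numbers below are assumptions chosen for illustration, not NIST figures, and the model (treating a search as independent one-to-one comparisons) is a deliberate simplification.

```python
# Assumed illustrative parameters -- not measured values.
false_match_rate = 1e-5        # chance a comparison wrongly matches a stranger
gallery_size = 12_000_000      # enrolled identities searched per probe
daily_probes = 1_000_000       # hypothetical citywide camera captures per day

# Even a tiny per-comparison error rate multiplies across a large gallery.
false_matches_per_probe = false_match_rate * gallery_size      # ~120 per search
false_matches_per_day = false_matches_per_probe * daily_probes # ~120 million
```

Under these assumptions, every single search of the full gallery produces on the order of a hundred wrong candidates, and a day of mass scanning produces a flood of false matches that no team of human reviewers could sift through.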
Another significant issue with face recognition is the notion of “bias.” Face recognition experts agree that accuracy varies among different ethnic and racial populations, as well as age and gender. But “varies” isn’t the same as what’s normally considered “bias.” Facial structures vary for different ethnic groups. What’s more, the variability within each group is inconsistent. That’s a naturally occurring phenomenon. The ability of face recognition algorithms to differentiate individuals within and across these groups also varies.
More research and data are needed to understand and lessen these issues, just as we’ve done in the past to address age and temporal differences. This insight will also help identify areas of concern so that we can better ensure that face recognition systems and other technologies aren’t applied in a biased manner.
So, what should we do next?
For face recognition specifically, I recommend that we establish a body of individuals who are recognized experts on face recognition, privacy, and civil liberties, but who aren’t advocates for using or banning the technology. This group can help legislators, policymakers, and the press understand the complex issues in a non-biased, rhetoric-free manner, thus enabling them to make decisions based upon an accurate foundation.
We must also understand that similar concerns are creeping up on us in other technologies, such as ubiquitous GPS use, always-listening voice assistants, public and private data aggregation, and artificial intelligence.
At the same time, our privacy and civil liberties controls are multiple decades old — if they exist at all. We need to analyze and update our laws and policies to accommodate this new class of capabilities so that the government and private sectors can leverage them while preserving the fundamental rights of individuals and society.
Face recognition capabilities have rapidly advanced in recent years, which presents new opportunities and concerns unlike anything we have encountered in the past. That can be scary. But if we work together, we’ll find a way forward that benefits and protects everyone.
Duane Blackburn is a Science & Technology Policy Analyst at The MITRE Corporation. He was previously an Assistant Director of the White House Office of Science & Technology Policy during the Bush and Obama administrations, with identity matters as one of his portfolios. He developed the first Face Recognition Vendor Test (FRVT 2000) — the first open, statistically significant assessment of commercial biometric technology, which the National Institute of Standards & Technology still manages today.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.