Face Recognition and Privacy: What You Should Know

by Lauren Jonik

As technology continues to evolve and redefine our public and private lives, new questions arise about how best to use the tools at our disposal. Face recognition software has the potential to be incredibly useful for law enforcement, but if misused, it can become deeply problematic.

The Perpetual Line-Up, a report by Clare Garvie, Jonathan Frankle and Alvaro Bedoya at the Center on Privacy & Technology at Georgetown Law, investigates police use of face recognition software. Ms. Garvie, an associate at the Center, was kind enough to be interviewed about this very important topic.

Lauren Jonik: You write, “Face recognition is a powerful technology that requires strict oversight. . . But those controls by and large don’t exist today. With only a few exceptions, there are no laws governing police use of the technology, no standards ensuring its accuracy, and no systems checking for bias. It’s a wild west.” Why hasn’t facial recognition technology been regulated?

Clare Garvie: The use of face recognition by law enforcement—particularly at the state and local level—has largely been unknown. We estimate conservatively that one out of every four law enforcement agencies in the U.S. has access to a face recognition system—either through purchase or through a law enforcement information and technology network. And over half of all American adults are in a face recognition database that police can either search directly, or request a search of, for criminal purposes—simply because they got a driver’s license.

Yet if you ask someone renewing her license at the DMV in Pennsylvania, Florida, Michigan or any one of 28 states across the country, chances are she won’t know that her photo will be used in criminal investigations like this.

LJ: Do you think the use of this technology is an example of how people unknowingly give up their privacy without realizing what’s happening?

CG: I think that is part of it. One of the reasons face recognition has become so common with so few controls is that people aren’t aware of its use by law enforcement.

It is important to realize that face recognition is a tool—depending on how it is used, it can be a powerful way for law enforcement to identify suspects more effectively and ultimately make our communities safer. But that power cuts both ways. The technology can be used to identify a person, or groups of people, remotely and without their knowledge, which makes it uniquely appealing as a surveillance tool. So without transparency or externally imposed controls, the temptation might be to use it in ways that do violate the public’s expectations of privacy.

LJ: Have there been any cases through the courts yet that address biometrics like face recognition?

CG: As far as I’m aware, no court has dealt directly with questions regarding face recognition, such as its validity as a tool for generating a positive identification or its evidentiary weight. However, there have been a few DNA-testing cases in which courts invoked face recognition as an analogy for less widely accepted DNA techniques.

For example, in People v. Johnson, 139 Cal. App. 4th 1135 (2006), the court held that a “cold hit” DNA match—where the suspect was identified by a database search, or trawl—used to identify the defendant as a possible rape suspect was not subject to the Kelly/Frye admissibility standard, because the search merely provided law enforcement with an investigative tool, not evidence of guilt. In its discussion, the court analogizes to a face recognition search run on surveillance footage of a robbery allegedly committed by “Joey.”

The court states: “Whether facial recognition software is discerning and accurate enough to select the perpetrator or whether it declared a match involving many different people who resembled ‘Joey,’ or how many driver’s license photographs were searched by the software, is immaterial: what matters is the subsequent confirmatory investigation. Stated another way, the fact that the perpetrator’s features appear to match those of someone in the DMV database does not affect the strength of evidence against ‘Joey;’ it is simply a starting point for the investigation.”

Similarly in People v. Collins, 15 NYS 3d 564 (Sup. Ct. 2015), the court held that evidence derived from certain DNA sequencing procedures is not generally accepted in the DNA scientific community and thus not admissible in court under the Frye test. The court uses face recognition as an analogy: “In that regard, the results of some other techniques—polygraphs and facial recognition software, for example—likewise can aid an investigation, but are not considered sufficiently reliable to be admissible at a trial.”

LJ: What are the legal implications of face recognition technology being used in public spaces like airports? Does this technology inherently alter the role of those working for the TSA?

CG: Face recognition as discussed in the news refers to a number of different processes. One is face verification—making sure someone is who she says she is by comparing her picture to her credentials or information on file. Another is face identification—searching for an unknown person’s identity by comparing his photo to a database of photos on file.
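To make that distinction concrete, here is a minimal sketch of the two processes in Python. Everything in it (the feature vectors, the cosine-similarity measure, the 0.8 threshold, the function names) is an illustrative assumption rather than any vendor’s actual API; real systems derive these vectors from trained neural networks.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Compare two face templates (feature vectors) by angle.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    # Face verification (1:1): is this person who she says she is?
    # One probe photo is compared against one record on file.
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, database: dict, top_k: int = 5) -> list:
    # Face identification (1:N): who is this unknown person?
    # One probe photo is compared against every record in a database,
    # returning the top-k most similar enrolled identities.
    scores = {name: cosine_similarity(probe, vec) for name, vec in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```

The privacy stakes differ accordingly: verification touches a single record the traveler has already supplied, while identification trawls an entire database of people who may never have consented to the search.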

Face recognition is likely conducted at airports by law enforcement agencies to search for wanted persons. In addition, various pilot programs at airports in the United States and around the world are exploring the use of face verification to augment existing security measures or to replace boarding passes. I expect this will become common.

Privacy and civil liberties questions begin to arise when these systems are subject to “mission creep” and begin to be used in ways they weren’t originally set up for, or become connected to law enforcement procedures.

LJ: If, for example, the software detected a wanted fugitive boarding a plane—like a parent who didn’t pay child support—but who was not on a no-fly list, could that person be apprehended?

CG: Depending on how the system was set up, this could be possible. It hinges on what information is linked to the person boarding the plane—this would require a database of all parents owing child support to be shared with the agency or airline running face recognition or face verification at the airport.

But in theory this could be done without face recognition, by connecting airport security checks to information about each passenger’s criminal or civil record, since we already have to verify our identity when flying.

LJ: What role does algorithmic bias play, particularly with regard to people of color?

CG: Companies and agencies have repeatedly said that face recognition is race-blind—it is ultimately comparing mathematical representations of faces, not looking at skin tone. But just because a system is race-blind does not mean it is race-neutral.

One of the few studies that have been done on face recognition accuracy rates across different races found that algorithms tended to perform less accurately on African Americans than on white people. What this actually means is that the algorithm is less likely to find a correct match in its database for a probe photo of a person of color. But most systems aren’t designed to give no returns; instead they give a list of multiple possible candidates. So the systems may be more likely to return a set of completely innocent candidates when the algorithm is searching for an African American.
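A rough sketch, with invented similarity scores, of why that design choice matters: a system that always returns its top k candidates hands investigators a full list of faces even when the person in the probe photo is not enrolled at all, whereas a thresholded system can report that there is no match. The scores, names, and 0.80 cutoff below are assumptions for illustration only.

```python
def search(scores, top_k=5, threshold=None):
    # 'scores' maps each enrolled identity to its similarity with the probe photo.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if threshold is not None:
        # A thresholded system can return an empty list: "no match found."
        ranked = [(name, s) for name, s in ranked if s >= threshold]
    # A pure top-k system always returns candidates, however weak the scores.
    return ranked[:top_k]

# Probe photo of someone who is NOT in the database: every score is low.
scores = {"person_a": 0.41, "person_b": 0.38, "person_c": 0.35,
          "person_d": 0.33, "person_e": 0.30}
print(search(scores))                  # five entirely innocent candidates
print(search(scores, threshold=0.80))  # [] -- correctly reports "no match"
```

If the algorithm scores one demographic systematically lower, the no-enrollment case above becomes more likely for that group, and every name it returns is a false lead.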

More research needs to be done in this area—this study was conducted in 2012. But even if racial biases are being corrected for in newer algorithms, these issues may persist in some deployed systems. And coupled with disparate arrest rates and disparate policing rates across the U.S., this means face recognition systems may perform worse on the demographic on whom they are used the most.

LJ: What are some of the broader surveillance implications of the use of facial recognition software—for example, the right to legally, peacefully assemble and protest?

CG: Face recognition is a remote biometric—it can be used to identify a person, or groups of people, from far away and in secret. This makes it a particularly powerful surveillance tool: it turns surveillance cameras into tools of identification. Law enforcement can identify people at a protest and connect them to their prior criminal histories, their social media presence, their addresses. The concern is that this will become a tool of social control, either intentionally or as an unintended consequence.

Law enforcement agencies themselves have stated that face recognition comes with the risk of chilling free speech by causing people to alter their behavior in public, to not participate in First Amendment protected activity that may be surveilled.

Another concern is that as the technology improves, it is becoming possible to do this in real time, or near real time. That means the technology can be used to locate a person at a given moment, and to track her as she moves through public spaces, simply by scanning her face.

LJ: As a culture, what do you think the balance should be between empowering law enforcement to do their jobs while ensuring the rights of individuals to maintain their privacy and be presumed innocent until convicted of a crime?

CG: I believe face recognition technology can and should be used by law enforcement to aid in investigations. However, I also think there are common-sense controls that can and should be put in place. I believe legislatures should ensure that the public is informed and plays a role in deciding when and how the technology is used. This happens through dialogue; through legislation, regulation, and public use policies; and through audits and reporting.

LJ: What level of transparency regarding the use of the technology would be appropriate?

CG: I believe there should be total transparency about what technology is purchased, how and when it can be used, and how often it is used. Our report’s recommendations, model use policy, and model legislation address this and the previous question in more detail.


Lauren Jonik is a writer and photographer in Brooklyn, NY. Her work has appeared in 12th Street, The Manifest-Station, Two Cities Review, Amendo, The Establishment, Bustle, Calliope and Ravishly. When she is not co-editing TheRefresh.co, she is working towards her Master’s degree in Media Management at The New School.

Follow her on Twitter: @laurenjonik.


Clare Garvie is an associate with the Center on Privacy & Technology at Georgetown Law. She was a co-author and the lead researcher on The Perpetual Line-Up: Unregulated Police Face Recognition in America, a report that examines the widespread use of face recognition systems by state and local police and the privacy, civil rights, and civil liberties consequences of this use. Her current research continues to focus on how face recognition impacts the average citizen, and the ways activists, public defenders, and policymakers can ensure that the technology is under control. She received her J.D. from Georgetown Law and her B.A. from Barnard College. Prior to her current research, she worked on national security and human rights issues.
