STATEMENT OF BERTRAM LEE, COUNSEL FOR MEDIA AND TECHNOLOGY
THE LEADERSHIP CONFERENCE ON CIVIL AND HUMAN RIGHTS
U.S. HOUSE OF REPRESENTATIVES
COMMITTEE ON THE JUDICIARY
SUBCOMMITTEE ON CRIME, TERRORISM, AND HOMELAND SECURITY
HEARING ON “FACIAL RECOGNITION TECHNOLOGY: EXAMINING ITS USE BY LAW ENFORCEMENT”
JULY 13, 2021
Chairwoman Jackson Lee, Chairman Nadler, Ranking Member Jordan, Ranking Member Biggs, and members of the subcommittee: My name is Bertram Lee, and I am Counsel for Media and Technology at The Leadership Conference on Civil and Human Rights, a coalition charged by its diverse membership of more than 220 national organizations to promote and protect the civil and human rights of all persons in the United States. Thank you for the opportunity to testify about the use of facial recognition technology by law enforcement.
Founded in 1950 by A. Philip Randolph, Arnold Aronson, and Roy Wilkins, The Leadership Conference works in support of policies that further the goal of equality under the law through legislative advocacy and public education. The Leadership Conference provides a powerful unified voice for the many constituencies of our coalition: persons of color, women, children, people with disabilities, LGBTQ people, older Americans, working people, immigrants, people of faith, civil libertarians, and human rights advocates. Given the breadth of our coalition, The Leadership Conference is uniquely positioned to address many of the most pressing issues affecting the nation today, including the need to transform the criminal-legal system in America and ensure that technology is designed and used in ways that respect civil rights.
The American criminal-legal system was built on a structure designed to capture humans escaping bondage, relying on community surveillance not only to capture, but also to punish those seeking freedom. Facial recognition technology, and the biases these systems contain, only serves to continue this history of discrimination and the disparate treatment of people of color in a new era. The expansion of surveillance technologies like facial recognition does not make communities of color safer; instead, it does the opposite. Use of facial recognition in law enforcement only further exposes people of color to the systemic racism of the criminal-legal system, reinforcing racist narratives that Black communities and other people of color are to be surveilled and over-policed at every turn. That is why this hearing is so important, and we commend the subcommittee for its much-needed and welcome attention to problematic uses of technology within our policing and criminal-legal system.
The Leadership Conference has spoken out against law enforcement use of facial recognition since 2016, highlighting the inherent bias of these tools and their disparate impact on marginalized communities that were already over-policed. Last month, The Leadership Conference, along with Upturn and New America’s Open Technology Institute, released “Civil Rights Concerns Regarding Law Enforcement Use of Face Recognition Technology,” which was signed by 40 advocacy organizations. The statement, which we request be entered into the record of this hearing, highlighted six of the most pressing civil rights concerns that advocacy organizations have with facial recognition technology:
- Regardless of technical accuracy, law enforcement use of face recognition systems could exacerbate the harms of policing in communities that are already targeted by the police.
- Law enforcement use of face recognition threatens individual and community privacy by allowing invasive and persistent tracking and targeting.
- Law enforcement use of face recognition can chill First Amendment-protected activities.
- Law enforcement use of face recognition can easily violate due process rights and otherwise infringe upon procedural justice.
- Face recognition systems used by law enforcement often rely on faceprints that have been obtained without consent.
- In addition to racial bias in how law enforcement use face recognition, the technology itself poses disproportionate risks of misidentification for Black, Asian, and Indigenous people.
Additionally, The Leadership Conference, the ACLU, and more than 45 advocacy organizations wrote a letter to the Biden administration calling for a moratorium on the government use of facial recognition technology.
We urge policymakers to act now to protect the public from face recognition technology. The most effective way to do that is by implementing a ban or moratorium on law enforcement use of face recognition.
Facial Recognition Technologies Are Inherently Biased
Facial recognition technology disproportionately misidentifies and misclassifies people of color, trans people, women, and other marginalized groups, posing threats to communities’ health, safety, and well-being. The reasons for these biases vary: In some cases, the cause of bias is the database an image is being searched against. In others, it is historical bias built into the algorithm through the images used to develop the technology. Some experts have even blamed inconsistent lighting. The end result, however, is the same: As the Center for Strategic and International Studies highlighted, “[e]ven if an algorithm shows no difference in its accuracy between demographics, its use could still result in a disparate impact if certain groups are over-represented in databases. African-American males, for example, are disproportionately represented in the mugshot databases many law enforcement facial recognition systems use for matching.” In at least three publicly known cases, police have relied on erroneous face recognition identifications to make wrongful arrests of Black men. Robert Williams, who is also testifying before the subcommittee today, was one such individual, and he will speak directly to the dangerous nature of this technology in the hands of law enforcement.
Studies of face recognition algorithms have found that commercial algorithms available for purchase and use by government entities around the country have inequitable error rates across a number of demographics, including race, sex, and age. The National Institute of Standards and Technology’s Face Recognition Vendor Test found significant variation across race, sex, and age in both false positive rates (instances where the algorithm reported a match when no such match existed) and false negative rates (instances where the algorithm failed to report a match that did exist). The highest false positive rates for U.S. law enforcement mugshots were among Black, Asian, and Indigenous people. In a comparison of match rates by country of origin, photos of people from East African countries had false positive rates 100 times higher than the baseline rate.
These findings are consistent with previous research conducted by Black scholars Joy Buolamwini, Deb Raji, and Timnit Gebru, which concluded that algorithms could be racist. Their research found that some facial analysis algorithms misclassified Black women nearly 35 percent of the time, while nearly always getting it right for White men. A subsequent study by Buolamwini and Raji at the Massachusetts Institute of Technology confirmed that these problems persisted with Amazon’s facial recognition technology.
Bias in facial recognition technology is hardly surprising given that the technology, data, and history of the tools, standards, and practices are based in racist systems. Cameras at their inception were not meant to portray the faces of Black and Brown people the same way they portrayed White people. These tools were not designed with people of color in mind, calling into question whether we can reasonably rely on a camera to tell us the “truth.”
Yet rather than investigating these tools and practices, law enforcement instead tests them on the bodies of people of color, further exacerbating the harms created by the criminal-legal system itself. Some families never recover, financially or psychologically, from a close relative’s encounter with the criminal-legal system. By continuing to use facial recognition technologies that are deeply biased against the very same communities that are targeted by surveillance and policing, law enforcement threatens to further entrench the current criminal-legal apartheid system of justice.
Improving Accuracy Will Not Mitigate the Disparate Impact of Facial Recognition
Improvements in the accuracy of facial recognition technology will not address the fundamental issue that facial recognition technology expands the scope and power of law enforcement, an institution that has a long and documented history of racial discrimination and racial violence that continues to this day. In the context of policing, facial recognition is always dangerous — no matter its accuracy. Throughout our nation’s history, law enforcement has used surveillance to silence dissent and to maintain white supremacy, as evidenced from slave patrols to the FBI’s COINTELPRO program. Facial recognition and other modern surveillance technologies promise to continue a history that has shown itself to be incompatible with the freedoms and rights of Black and Brown communities.
Even if the technology worked perfectly, it would still facilitate the mass tracking of people’s movements in public spaces, a point highlighted in the recent revelation that the Los Angeles Police Department was spying on Black Lives Matter protestors using Amazon Ring cameras. When combined with existing networks of surveillance cameras dotting our urban and suburban landscapes, facial recognition algorithms could enable governments to track the public movements, habits, and associations of all people, at all times — merely with the push of a button. In the year 2021, the persistent tracking of all people in America’s public spaces with facial recognition technology is no longer relegated to the realm of science fiction; it is now our reality. As Congress considers how our nation’s systems of policing and justice disproportionately harm communities of color, it must decelerate law enforcement’s ability to wield such a powerful technology when history and recent abuses provide little reason to think it will be used responsibly.
Facial Recognition Technologies Only Expand the Current Police State
In a disturbing development, federal, state, and local law enforcement are increasingly using facial recognition technology for routine investigations, and facial recognition networks have grown to include half of all American adults. According to a report from the Georgetown Center on Privacy and Technology, more than 133 million American adults are included in facial recognition networks across the country, and at least one in four state or local police departments can run facial recognition searches through their own network or the network of another agency. For example, the Sheriff’s Office in Pinellas County, Florida, alone estimates that more than 8,000 facial recognition searches are conducted on its system every month. Similarly, the FBI is expanding the reach of its face recognition unit (FACE Services), through which it can access more than 30 million photos in its own database and scan the driver’s license photos of 16 states. From August 2011 through December 2015, the FBI requested nearly 215,000 searches of external partners’ databases alone. Last Congress, the FBI confirmed that it has the ability to match against more than 640 million photos.
Though facial recognition technology is rapidly merging with everyday police activities in nearly every jurisdiction in America, the safeguards to ensure this technology is being used fairly and responsibly are virtually nonexistent. The FBI and other federal law enforcement agencies have been using facial recognition technology for years and without explicit authorization or guidance from Congress. That must be corrected, but law enforcement use is not the only concern. Facial recognition technology is being deployed in schools, workplaces, public housing developments, and health care facilities, in many cases to devastating effect.
For instance, in schools, Black and Brown children are disproportionately disciplined compared to their White counterparts for the same behavior. Introducing facial recognition technology into an already discriminatory environment compounds the negative and disparate impacts on Black and Brown children by increasing their interactions with school officials and greasing the school-to-prison pipeline. There is already evidence that law enforcement is using facial recognition technology not only in major cases but in minor ones as well. The use of facial recognition in minor or misdemeanor investigations only adds to the disparate policing that people of color and LGBTQ people experience on a daily basis. LGBTQ youth are particularly targeted by law enforcement and incarcerated at much higher rates compared to heterosexual/cisgender youth.
Law enforcement is expanding its surveillance powers at a time when many Americans are becoming increasingly cognizant of police abuses, as evidenced by the nationwide protests for police accountability over the past year. If the government can track everyone who goes to a place of worship, attends a political rally, or seeks health care for reproductive health or substance use, we lose our freedom to pray to the god we want, speak our minds, freely criticize the government, and access health care in private. Americans should feel free to do these things without fear that government officials are secretly tracking and cataloging their every move.
Over the past two years, activists have pushed local leaders to pass bans on government use of facial recognition in at least 20 municipalities across the country, including Boston, Mass., San Francisco, Calif., and Jackson, Miss. States including Vermont, California, Maine, Virginia, and New York have passed legislation halting some government use of the technology, particularly in light of the substantial racial justice and constitutional concerns detailed above. Unfortunately, however, local and state governments are largely powerless to control the way the FBI and other federal agencies use this technology in communities. As more jurisdictions place a moratorium on facial recognition technology, Congress needs to have a larger conversation about not only federal use of facial recognition technology, but also the impact that these tools have in expanding the ever-more present police state.
Evidence that facial recognition technology impedes civil and human rights has never been clearer. Even major technology companies have acknowledged the harms of law enforcement use of facial recognition. In June 2020, as institutions across the country faced a reckoning over racial equity, IBM, Microsoft, and Amazon announced they would halt the sales of their facial recognition technology to law enforcement.
Given the serious and undeniable threat that facial recognition technology poses, especially to Black and Brown people and other marginalized communities, we urge Congress to act swiftly to ensure that law enforcement does not have another tool to disproportionately discriminate against our most vulnerable communities. We look forward to working closely with this subcommittee to address the serious concerns with this technology.
 Connie Hassett-Walker, The racist roots of American policing: From slave patrols to traffic stops, The Chicago Reporter (June 7, 2019), https://www.chicagoreporter.com/the-racist-roots-of-american-policing-from-slave-patrols-to-traffic-stops/.
 Frances Adams-O’Brien, Is There Empirical Evidence That Surveillance Cameras Reduce Crime?, Municipal Technical Advisory Service Institute for Public Service (September 26, 2016).
 The Leadership Conference on Civil and Human Rights, Letter to Principal Deputy Assistant Attorney General Vanita Gupta (October 18, 2016), https://www.aclu.org/sites/default/files/field_document/coalition_letter_to_doj_crt_re_face_recognition_10-18-2016_1.pdf; The Leadership Conference on Civil and Human Rights, Letter to Committee on Oversight and Reform about Protests (June 30, 2020), https://civilrights.org/resource/letter-to-committee-on-oversight-and-reform-about-protests/; The Leadership Conference on Civil and Human Rights, Comments in Opposition to Proposed Rulemaking: Collection of Biometric Data from Aliens upon Entry to and Departure from the United States (December 21, 2020), https://civilrights.org/resource/comments-in-opposition-to-proposed-rulemaking-collection-of-biometric-data-from-aliens-upon-entry-to-and-departure-from-the-united-states/; The Leadership Conference on Civil and Human Rights, Civil Rights Groups Urge Strong Ethical Review of Axon’s Police Technology (April 26, 2018), https://civilrights.org/2018/04/26/civil-rights-groups-urge-strong-ethical-review-axons-police-technology/.
 The Leadership Conference on Civil and Human Rights et al., Civil Rights Concerns Regarding the Law Enforcement Use of Facial Recognition Technology (June 3, 2021), https://newamericadotorg.s3.amazonaws.com/documents/FINAL_Civil_Rights_Statement_of_Concerns_LE_Use_of_FRT_June_2021.pdf.
 American Civil Liberties Union, Facial Recognition Technology Letter to President Biden (February 26, 2021), https://www.aclu.org/sites/default/files/field_document/02.16.2021_coalition_letter_requesting_federal_moratorium_on_facial_recognition.pdf.
 Amrita Khalid, Facial recognition AI can’t identify trans and non-binary people, Quartz (October 26, 2019), https://qz.com/1726806/facial-recognition-ai-from-amazon-microsoft-and-ibm-misidentifies-trans-and-non-binary-people/.
 William Crumpler, The Problem of Bias in Facial Recognition, The Center for Strategic & International Studies (May 1, 2020), https://www.csis.org/blogs/technology-policy-blog/problem-bias-facial-recognition.
 Patrick Grother, Mei Ngan, and Kayee Hanaoka, NISTIR 8280: Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, National Institute of Standards and Technology (December 2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.
 Joy Buolamwini and Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77-91 (2018), http://proceedings.mlr.press/v81/buolamwini18a.html.
 Natasha Singer, Amazon Is Pushing Facial Technology That a Study Says Could Be Biased, The New York Times (January 24, 2019), https://www.nytimes.com/2019/01/24/technology/amazon-facial-technology-study.html.
 Sarah Lewis, The Racial Bias Built Into Photography, The New York Times (April 25, 2019), https://www.nytimes.com/2019/04/25/lens/sarah-lewis-racial-bias-photography.html.
 Incarceration’s Impact on Kids and Families, The Vera Institute for Justice (2016), http://humantollofjail.vera.org/the-family-jail-cycle/.
 Devlin Barrett, FBI pressured to answer for domestic-spying program tied to Black Panther Fred Hampton’s killing in 1969, The Washington Post (May 4, 2021), https://wapo.st/3hN2HTD.
 Matthew Guariglia and David Maass, LAPD Requested Ring Footage of Black Lives Matter Protests, Electronic Frontier Foundation (February 26, 2021).
 Facial Recognition Technology (Part 1): Its Impact on Our Civil Rights and Liberties, U.S. House of Representatives Committee on Oversight and Reform, 115th Congress (2019), Statement of Clare Garvie, Senior Associate, Center on Privacy & Technology at Georgetown Law, https://www.congress.gov/116/meeting/house/109521/witnesses/HHRG-116-GO00-Wstate-GarvieC-20190522.pdf; Clare Garvie, Alvaro M. Bedoya, & Jonathan Frankle, The Perpetual Line-Up: Unregulated Police Face Recognition in America, Center on Privacy & Technology at Georgetown Law (Oct. 18, 2016) http://www.perpetuallineup.org.
 Jerry Iannelli, Miami-Dade Cops Want Permanent Access to Controversial Facial Recognition Database, Miami New Times (November 8, 2019), https://www.miaminewtimes.com/news/miami-dade-police-department-wants-to-use-pinellas-county-faces-facial-recognition-database-11313634.
 Government Accountability Office, Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy (May 2016), http://www.gao.gov/assets/680/677098.pdf.
 Russell Brandom, Facebook, Twitter, and Instagram Surveillance Tool Was Used to Arrest Baltimore Protestors, The Verge (October 11, 2016), http://www.theverge.com/2016/10/11/13243890/facebook-twitter-instagram-police-surveillance-geofeedia-api; Neema Singh Guliani, The FBI Has Access to Over 640 Million Photos of Us Through Its Facial Recognition Database, American Civil Liberties Union (June 7, 2019), https://www.aclu.org/blog/privacy-technology/surveillance-technologies/fbi-has-access-over-640-million-photos-us-through.
 Supra note 13.
 Alfred Ng, This Manual for a Popular Facial Recognition Tool Shows Just How Much the Software Tracks People, The Markup (July 6, 2021), https://themarkup.org/privacy/2021/07/06/this-manual-for-a-popular-facial-recognition-tool-shows-just-how-much-the-software-tracks-people.
 Brett Arends, Black children are more likely to be disciplined than white kids for the same behavior, Marketwatch (October 16, 2019), https://www.marketwatch.com/story/black-children-are-more-likely-to-be-disciplined-than-white-kids-for-the-same-behavior-2019-10-16.
 Alfred Ng, Police are using facial recognition for minor crimes because they can, CNET (October 24, 2020), https://www.cnet.com/tech/services-and-software/police-are-using-facial-recognition-for-minor-crimes-because-they-can/.
 See, e.g., Movement Advancement Project et al., Unjust: LGBTQ Youth Incarcerated in the Juvenile Justice System (June 2017), https://www.lgbtmap.org/criminal-justice-youth-detention.
 Julie Carr Smyth, States Push Back Against Use of Facial Recognition by Police, Associated Press (May 5, 2021), https://apnews.com/article/race-and-ethnicity-health-coronavirus-pandemic-business-technology-e4266250f7e2d691d4d664735c2c6bc0.