Testimony of K.J. Bagchi Before the U.S. Commission on Civil Rights on the Use of Facial Recognition Technologies

STATEMENT OF KOUSTUBH “K.J.” BAGCHI
VP, CENTER FOR CIVIL RIGHTS AND TECHNOLOGY
THE LEADERSHIP CONFERENCE ON CIVIL AND HUMAN RIGHTS

U.S. COMMISSION ON CIVIL RIGHTS

BRIEFING ON THE USE OF FACIAL RECOGNITION TECHNOLOGIES

Friday, March 8, 2024

On behalf of The Leadership Conference on Civil and Human Rights (The Leadership Conference) and its Center for Civil Rights and Technology, we are providing this written testimony in advance of the U.S. Commission on Civil Rights briefing on the government’s use of Facial Recognition Technology (FRT). We understand that the commission is seeking information for its annual statutory enforcement report about the civil rights concerns related to how FRT is developed and utilized by the U.S. Department of Justice (DOJ), U.S. Department of Homeland Security (DHS), and U.S. Department of Housing and Urban Development (HUD).

The Leadership Conference is a coalition charged by its diverse membership of more than 240 national organizations to promote and protect the civil and human rights of all persons in the United States. Through its membership, its Center for Civil Rights and Technology, and its Media/Telecommunications Task Force, The Leadership Conference works to ensure that civil and human rights, equal opportunity, and democratic participation are at the center of communication, public education, and technology policy debates. We have been actively engaged in policy development to ensure civil rights are at the center of the development and use of new technologies, especially where those technologies are rights- and safety-impacting.

The Leadership Conference appreciates the opportunity to raise critical concerns about the use of FRT, which continues to have serious consequences and has resulted in harm to individuals and communities of color. Those harms cut across sectors, from its use by law enforcement and immigration officials to its use in public housing. Simply put, the use of FRT threatens civil rights, and measures must be taken to ensure that safeguards are put in place, including identifying instances where the technology should not be used.

Technology that threatens civil rights should be banned.

The Leadership Conference has spoken out against the use of facial recognition since 2016 and has a long history of weighing in on law enforcement issues. Our testimony before Congress at a hearing on “Facial Recognition Technology: Examining its Use by Law Enforcement” further highlighted the inherent bias of FRT and its disparate impact on marginalized communities.[1] That testimony noted a report on “Civil Rights Concerns Regarding Law Enforcement Use of Facial Recognition Technology.” While focused on law enforcement, the civil rights concerns raised by the more than 40 advocacy organizations that signed on to that report not only still hold true today but also apply to other uses of FRT, including in housing and immigration. The civil rights concerns raised are:

  1. Regardless of technical accuracy, the use of face recognition systems could exacerbate harms in underserved communities.
  2. Use of face recognition threatens individual and community privacy by allowing invasive and persistent tracking.
  3. Use of face recognition can chill First Amendment-protected activities.
  4. Use of face recognition can violate due process rights and otherwise infringe upon procedural justice.
  5. Face recognition systems often rely on faceprints that have been obtained without consent.
  6. In addition to racial bias, the technology itself poses disproportionate risks of misidentification for Black, Asian, and Indigenous people.

We agree with a group of senators, who in a recent letter recognized “that facial recognition software can be inaccurate and unreliable.”[2] The letter went further, calling “into question federal funding for technology that violates Title VI of the Civil Rights Act which prohibits ‘discrimination under any program or activity receiving Federal financial assistance.’”

GAO report on facial recognition services finds basic measures not taken to protect civil rights.

A recent Government Accountability Office (GAO) report on facial recognition services is troubling. It found that agencies were using FRT without taking basic measures to ensure civil rights are being protected. The GAO reported that the Departments of Homeland Security and Justice used FRT without requiring staff to take facial recognition training. Also problematic is that the GAO found that some agencies do not have policies specific to FRT to help protect people’s civil rights and civil liberties. The GAO recommended that federal law enforcement agencies, including Customs and Border Protection, take action to implement training and policies protecting civil liberties.

A major question to answer is how agencies are held accountable for the FRT systems they are using. Besides policies and training, it is not clear whether or how those systems are assessed or tested and how determinations are made to ensure that the FRT use does not threaten civil rights. The GAO report is another indication that safeguards are not in place, and even more problematic, there appears to be a lack of accountability when it comes to the procurement and use of technology that threatens civil rights.

Use of facial recognition technology has already caused significant rights-impacting harm.

Law Enforcement Uses and Harms

Facial recognition technology can be used by law enforcement for various purposes, including in investigations, identifying victims of crimes, and sorting out faces in photographs that are part of evidence. FRT poses risks related to privacy, accuracy, and bias. Some FRT systems have higher error rates for certain demographic groups, such as women and people of color. Further issues include a lack of proper oversight, transparency, and accountability.

Bias and discrimination in FRT systems have resulted in unwarranted and false arrests of individuals and have enabled persistent tracking and over-policing. Recent cases include:

Randal “Quaran” Reid (Atlanta, Georgia; arrested 11/22)[3]
Background: The police arrested Quaran Reid in Georgia while he was driving home from his mother’s house the day after Thanksgiving. The police told him that he was wanted for crimes in Louisiana, a state he had never visited. Nevertheless, he spent the next few days in jail. He later sued over the misuse of facial recognition technology. In 2023, he was among at least five Black plaintiffs who had filed lawsuits against law enforcement saying they were misidentified by facial recognition and wrongfully arrested.

Consequences: Quaran Reid was wrongfully arrested and sent to jail. His car was towed, the food in jail made him sick, and he missed multiple days of work. He is still traumatized from what happened to him and still thinks about that police stop every time he sees police in his rearview mirror.

Harvey Eugene Murphy, Jr. (Houston, Texas; arrested 10/23)[4]
Background: A robbery occurred at a Sunglass Hut in Houston, Texas in January 2022. Even though 61-year-old Harvey Murphy, Jr. was living in California at the time, the police identified him as a suspect in the robbery. The police arrested him when he returned to Texas to renew his driver’s license. He was held in jail and sexually assaulted by three men. The Texas DA’s office later determined he was not involved in the robbery.

Consequences: He was arrested and held in jail, where he was sexually assaulted and sustained life-long injuries. He is now suing Sunglass Hut’s parent company and Macy’s because the companies’ loss prevention departments used facial recognition to identify him.

Robert Williams (Detroit, Michigan; arrested 01/20)[5]
Background: Robert Williams was wrongly arrested for theft. After a store was robbed, the police ran the surveillance footage through their facial recognition system, turning up an incorrect match. They arrested Williams at his house, in front of his family.

Consequences: He spent 18 hours in jail and was released only after the ACLU was contacted and a defense attorney was provided. His daughters continue to suffer trauma associated with his arrest.

Porcha Woodruff (Detroit, Michigan; arrested 02/23)[6]
Background: The police arrested Porcha Woodruff when she was eight months pregnant, holding her in jail for 11 hours, all based on a false identification match made with facial recognition technology. They released her to a hospital on a $100,000 bond once she started experiencing contractions. She was the third person to be falsely identified by facial recognition technology by a single police department. She is the seventh person known to have been wrongfully arrested after being identified by facial recognition technology (there may very well be more), every single one of them Black.

Consequences: She was arrested, held in jail for hours, and posted bail, the stress of which endangered her unborn child, leading to hospitalization. She also had to endure protracted legal proceedings to resolve her case and is now suing the city of Detroit.

Nijeer Parks (Woodbridge, New Jersey; arrested 02/19)[7]
Background: Nijeer Parks was arrested in February 2019 after he walked into a police station to clear his name. The police had gone to his grandmother’s house to look for him, and Parks assumed it was a misunderstanding. Instead, the police cuffed him and detained him for 11 days. He was charged with aggravated assault, unlawful possession of weapons, using a fake ID, possession of marijuana, shoplifting, leaving the scene of a crime, and resisting arrest, and was accused of almost hitting a police car. The only evidence that the prosecutors and judge had was a “high-profile comparison” from a facial recognition scan of the photo on the fake ID that was left at the crime scene.

Consequences: He spent 11 days in jail, and it took nearly a year for the charges against him to be dropped. Parks is now suing for violation of his civil rights and intentional infliction of emotional distress.

Alonzo Sawyer (Baltimore, Maryland; arrested spring/22)[8]
Background: Sawyer was arrested for assaulting a bus driver, even though his wife knew that he was at home. The actual perpetrator, when found, was seven inches shorter and 20 years younger, points that Sawyer’s wife raised repeatedly when speaking with the police. Public defenders in Maryland allege facial recognition technology was used more than 800 times in 2022.

Consequences: At 54 years old, he was forced to spend nine days in jail. He missed family events, multiple days of work as a barber, and could not complete a construction contract.

Michael Oliver (Detroit, Michigan; arrested 05/19)[9]
Background: Michael Oliver, a 25-year-old man, was wrongfully accused of reaching into someone’s car, grabbing their phone, and throwing it. Oliver was charged with larceny after being identified by facial recognition software, despite having dozens more tattoos than the suspect shown in the imagery used for identification. However, in this instance, it should be noted that he was also misidentified by the victim of the crime.

Consequences: His jail time is unclear, but his case dragged on for months until it was dismissed by a judge. It was only afterwards that he learned he had been arrested because of facial recognition technology. Prosecutors apparently now have to submit facial recognition cases to the highest-ranking person in the office for approval, and there must be other evidence corroborating the allegations in order to charge someone. Nevertheless, two other wrongful arrests took place in the same jurisdiction.

Housing Uses and Harms

Facial recognition technology can be used in housing for several purposes, including security, access control, and tenant screening. FRT raises concerns about privacy, accuracy, and bias, especially for communities living in public housing.

Just last year, The Washington Post reported that local officials across the United States are using surveillance cameras equipped with FRT to “punish and evict public housing residents.”[10] According to the Post, those cameras have been primarily purchased through HUD grants.[11] While the justification for the surveillance was to make housing projects safer, the Post further found that “few” of the law enforcement agencies that were contacted for the report were able to substantiate a link to lowered crime.

The Post also found that in some instances, the number of cameras in housing projects was much greater than those placed elsewhere in the community, leading to the potential for over-policing in those neighborhoods.

HUD was right to publish a notice last year banning the implementation of “automated surveillance and facial recognition technology” in public housing.

Use of FRT in Immigration

Facial recognition technology is used to verify the identity of individuals entering or exiting the country. It is also being used in the asylum application process by allowing migrants to submit their photos via a mobile application.

Just last year, U.S. Customs and Border Protection (CBP) rolled out a mobile app, CBP One, for immigrants to apply for asylum at the border. But immigration advocates found that the app blocked Black people from being able to file claims because of facial recognition bias in the technology. As the Guardian reported, CBP said that the app would “reduce wait times and help ensure safe orderly and streamlined processing.” Despite seemingly good intentions, the app was problematic, failing to pick up images of people with darker skin tones.[12]

The Global Entry program also uses FRT, which has resulted in concerns that travelers’ faces will be catalogued by the U.S. government along with their personal information and potentially other biometric information — with no clear rules and regulations in place about how that data can be used.

Given the threat to civil rights caused by FRT, it should not be used. If it is used, safeguards must be put in place.

The administration’s commitment to ensuring civil rights are protected must also apply when FRT is considered for use in law enforcement, immigration, and housing.

Through executive orders, public statements, federal guidance, and specific policy actions, the administration has played a pivotal role in elevating civil rights in artificial intelligence and technology policy. Key achievements in this work include the administration’s AI Bill of Rights, agency commitments to update existing guidance and regulations in light of the AI Bill of Rights, Executive Order 14091, NIST’s AI Risk Management Framework, and pending guidance expected from the Office of Management and Budget (OMB). Those frameworks must apply to the government use of FRT.

In each of those actions, the administration has also highlighted equity and civil rights in technology through its public communications, including President Biden’s April 4, 2023 remarks to the President’s Council of Advisors on Science and Technology[13] and his Wall Street Journal op-ed.[14] These efforts recognize the real harms that automated systems can cause and help to ensure that automated systems do not quietly undermine the administration’s broader commitment to advancing equity and civil rights throughout the federal government. The Biden-Harris administration’s May 2023 announcements on American innovation in AI, related federal investments, and efforts to mitigate AI harms also reflect the administration’s continued focus on these issues.

Those commitments must apply wherever technology is used, especially where use of that technology, including FRT, can have a serious impact on an individual’s life. People who are marginalized because of race, ethnicity, religion, gender, sexual orientation, gender identity, immigrant status, or disability status often experience more severe and more frequent harms from automated systems, yet AI is being used increasingly in high-stakes settings like immigration, policing, and housing, in addition to credit and lending, education, tax audits, insurance, and hiring. As communities and businesses across the country grapple with the impact of AI, now is a critical moment.

AI must be shown to be safe and trustworthy, and to produce intended, rights-protecting outcomes, before it is put into use.

In an August 4, 2023 letter to the White House, leading civil rights and civil society organizations said that “(F)ederal agencies funding, acquiring, or using an AI system have a responsibility to ensure that the system works and is fit for purpose.” The groups further urged that the federal government should not use AI systems unless they are shown to be effective and safe. No definition of safe and effective is meaningful unless it is explicit and clear that it includes being non-discriminatory and non-violative of civil and human rights. Simply put: AI should work, and it should work for everyone.[15] The American public should be protected against existing and potential harms from AI — including threats to people’s rights, opportunities, jobs, economic well-being, and access to critical resources and services, especially where technology like FRT is used.

The marginalized communities served by the different agencies across the federal government are those that bear the most risk from the use of untested AI systems. People expect that risks associated with other regulated products will be identified, mitigated, and made known. Likewise, we expect that the technology used is safe and effective — that it works.

We appreciate the administration’s continued commitment to equity and civil rights related to the development and use of AI, including FRT. These values underpin our democracy. Prior to procuring, using, or funding powerful new technology, agencies must also ensure that the technology works. That means that the technology has had sufficient, transparent testing to ensure that it will produce intended, fair, equitable, and unbiased results and does not produce inequitable outcomes for historically disadvantaged groups. Those measures must be taken before FRT is developed, procured, funded, or used.

As The Leadership Conference on Civil and Human Rights commented[16] in response to the Office of Management and Budget’s Draft AI Guidance Memo,[17] there are important safeguards that need to be put into place. Moreover, civil rights protections must be required of all agencies, including those responsible for law enforcement, immigration, and housing.

Agencies must implement practices to manage risks from rights-impacting and safety-impacting FRT.

Just as we know of the harms biased and broken AI systems can cause, we know what can be done to identify, prevent, or mitigate those harms. The OMB memo includes concrete, measurable, and scalable actions that agencies will be required to take to ensure that AI systems, like FRT, work — and that risks are managed. In fact, these actions reflect practices already discussed in tech policy, like the EU AI Regulation, and they reflect the “responsible AI principles” adopted by many companies and industry sectors.

The OMB memo includes sound requirements that all agencies using AI systems, like FRT, should be required to implement, including:

  • AI should not be used unless it is shown not to be biased.
  • Implementation of risk management requirements, including pre-deployment impact assessments, real-world testing, independent evaluations, and ongoing monitoring.
  • Requirements for explainability and transparency.
  • Training for staff procuring or using AI.
  • A requirement to consult with affected groups.
  • The need for agencies to provide for remedies and recourse.
  • The need for agencies to consider disparate impact.

Agencies must consider the impact of AI on people with disabilities.

Agencies using FRT must ensure that the AI systems they use consider all members of our community. People with disabilities continue to face accessibility challenges in using AI systems, despite the executive order calling for accessibility. Agencies need to intentionally include people with disabilities by building systems that conform to accessibility standards. Agencies should also consider the impact that differences in language may have, to ensure accessibility for the communities where AI systems are used.

A process for ongoing and regular public engagement, including with civil society and civil rights organizations on agencies’ use of AI, must be established. 

The public — individuals — have the most to gain or lose from the use of AI. It is critical that the public interest is represented. Agencies should be required to establish defined programs to proactively seek community input as they implement AI systems, including FRT.

Conclusion

The Leadership Conference appreciates this opportunity to testify before the commission. For the use of AI systems like FRT to be trustworthy, agencies must ensure that the risks are considered early and throughout the AI lifecycle through design, development, and deployment. Before procuring or using AI, an agency should understand its limitations, recognize its intended uses as well as potential misuses, consider how to ensure the AI works for all people, and prevent harm. If an FRT system threatens civil rights, it should be banned.

[1] Facial Recognition Technology: Examining Its Use by Law Enforcement: Hearing Before the Subcomm. on Crime, Terrorism, and Homeland Security, 117th Cong. (2021) (Statement of Bertram Lee) (available at: https://civilrights.org/resource/statement-of-bertram-lee-counsel-for-media-and-technology-hearing-on-facial-recognition-technology-examining-its-uses-by-law-enforcement/).

[2] “Cardin, 17 Senate Colleagues Raise Concerns About Facial Recognition Software, Demand Better DOJ Oversight,” U.S. Senator Ben Cardin (Jan. 19, 2024) (https://www.cardin.senate.gov/press-releases/cardin-17-senate-colleagues-raise-concerns-about-facial-recognition-software-demand-better-doj-oversight/).

[3] Sudhin Thanawala, “Facial Recognition Technology Jailed a Black Man for Days. His Lawsuit Joins Others from Black Plaintiffs,” AP News (Sept. 25, 2023) (https://apnews.com/article/mistaken-arrests-facial-recognition-technology-lawsuits-b613161c56472459df683f54320d08a7).

[4] Caitlin O’Kane, “This Grandfather was Mistakenly Identified as a Sunglass Hut Robber by Facial Recognition Software. He’s Suing After He was Sexually Assaulted in Jail,” CBS News (Jan. 24, 2024) (https://www.cbsnews.com/news/facial-recognition-mistaken-identity-sunglass-hut-robber-harvey-eugene-murphy-suing-after-sexually-assaulted-jail/).

[5] Robert Williams, “I Did Nothing Wrong. I Was Arrested Anyway.” ACLU (July 15, 2021) (https://www.aclu.org/news/privacy-technology/i-did-nothing-wrong-i-was-arrested-anyway).

[6] “Meet Porcha Woodruff, Detroit Woman Jailed While 8 Months Pregnant After False AI Facial Recognition,” Democracy Now (Aug. 9, 2023) (https://www.democracynow.org/2023/8/9/porcha_woodruff_false_facial_recognition_arrest).

[7] John General and Jon Sarlin, “A False Facial Recognition Match Sent This Innocent Black Man to Jail,” CNN (Apr. 29, 2021) (https://www.cnn.com/2021/04/29/tech/nijeer-parks-facial-recognition-police-arrest/index.html).

[8] “Alonzo Sawyer Facial Recognition Wrongful Arrest, Jailing,” AIAAIC (2022) (https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/alonzo-sawyer-facial-recognition-mistaken-arrest).

[9] Elisha Anderson, “Controversial Detroit Facial Recognition Got Him Arrested for a Crime He Didn’t Commit,” Detroit Free Press (July 10, 2020) (https://www.freep.com/story/news/local/michigan/detroit/2020/07/10/facial-recognition-detroit-michael-oliver-robert-williams/5392166002/).

[10] Douglas MacMillan, “Eyes on the Poor: Cameras, Facial Recognition Watch Over Public Housing,” Washington Post (May 16, 2023) (https://www.washingtonpost.com/business/2023/05/16/surveillance-cameras-public-housing/).

[11] Id.

[12] “Facial Recognition Bias Frustrates Black Asylum Applicants to US, Advocates Say: Migrants from Africa and Haiti Reportedly Cannot Get the App to Accept Their Photos, Which Is Now Required to Apply for Asylum,” The Guardian (Feb. 8, 2023).

[13] Remarks by President Biden in Meeting with the President’s Council of Advisors on Science and Technology, The White House (April 4, 2023), https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/04/04/remarks-by-president-biden-in-meeting-with-the-presidents-council-of-advisors-on-science-and-technology/.

[14] Joe Biden, “Republicans and Democrats, United Against Big Tech Abuses,” Wall St. Journal (Jan. 11, 2023) (https://www.wsj.com/articles/unite-against-big-tech-abuses-social-media-privacy-competition-antitrust-children-algorithm-11673439411).

[15] Letter from the Center for American Progress, The Leadership Conference on Civil and Human Rights, and The Center for Democracy and Technology to the White House, Aug. 4, 2023.

[16] The Leadership Conference on Civil and Human Rights, Comments on the OMB Draft AI Guidance Memo (Dec. 5, 2023) (civilrights.org).

[17] “OMB Releases Implementation Guidance Following President Biden’s Executive Order on Artificial Intelligence,” The White House, Office of Management and Budget (Nov. 1, 2023).