Racial Disparities in Facial Recognition Technology Use in Policing: An Interview with Tawana Petty
By Mariah Wildgen
Last week, the U.S. Commission on Civil Rights acknowledged the need for federal safeguards to prevent harms and ensure equity in the use of facial recognition technology, especially regarding its use by law enforcement.
In this discussion, Tawana Petty, a community organizer based in Detroit, Michigan, examines the real-world impacts of the use of facial recognition technology in policing. She illustrates the potential and actual harms to her community in Detroit and around the country, while laying out an alternative vision for community safety.
Mariah: How do facial recognition technology and surveillance impact you and the people in your community in Detroit?
Tawana: In Detroit, we have experienced widely known racial disparities in police use of facial recognition technology and other surveillance technologies. We account for nearly half of the United States’ known cases of police misidentifying someone with the technology. In one recent case, a resident was misidentified and had their car impounded for three weeks because of license plate reader data.
It’s common sense to want to live in a safe and well-resourced city. However, conflating safety with surveillance means that our human dignity is always at risk. You’re living under a perpetual line-up hoping that you aren’t the next Black person to be picked up for a crime you didn’t commit.
Our city’s Project Green Light program is being held up as a model for other cities across the United States and in Canada. This should be cause for concern for residents living in those places.
Mariah: How did you become involved in community organizing around facial recognition technology?
Tawana: I started to think about digital consent and surveillance somewhere around 2011 or 2012, when I began to study more about the history of programs like COINTELPRO and their impact on social justice leaders and organizers. But it wasn’t until 2015, when I was working with Our Data Bodies doing research on the impact of data extraction on marginalized communities, that I began to focus on this work more consistently. While I was learning through our research about the data and digital trade-offs that community members were experiencing while seeking quality of life support, I started to hear about Project Green Light. At first, I filed it away in my mind as something to just keep an eye on. They were speaking about these police-monitored flashing green lights that were going to be at about eight or nine gas stations across the city that stayed open late. I had concerns, but it didn’t seem like something that was going to proliferate the way that it did. In hindsight, that was naïve of me to think. History is an excellent educator of things to come.
Once I realized that the surveillance ball was rolling swiftly across the city, I started to organize and educate community members and fellow social justice organizers about what I was learning from organizations like Fight for the Future and Georgetown’s Center on Privacy and Technology. I have been committed to this work pretty much non-stop since then.
There have been some small policy wins, but unfortunately, we are far more surveilled in Detroit than we were when I started to do this work.
Mariah: What would real public safety in your community look like?
Tawana: Real public safety comes from community members being treated as fully human. If a neighborhood is struggling with food insecurity, housing insecurity, water insecurity, education insecurity, job insecurity, and a lack of mental health support, the solutions for transforming that neighborhood’s conditions seem obvious. However, what I see consistently are reactionary policies that respond to quality-of-life crimes, which tend to be more prevalent in neighborhoods facing those struggles, with surveillance technologies. If a puppy was trapped in a kennel with no food or water and became aggressive because of hunger and abandonment, you wouldn’t hover a drone over it to watch it suffer until you could criminalize it for misbehaving — you would give it food, water, and some tender loving care. All living beings deserve similar responsive solutions: love, empathy, and the tools and resources we need to survive and thrive.
Mariah: Who do you feel should be held accountable for harms created by facial recognition and other surveillance technology?
Tawana: Developers, deployers, designers, and procurers, including law enforcement agencies and government agencies. I also think computer scientists and researchers must take clearer and stronger stances on the societal impacts their innovations have on communities. The biases we must account for are not just statistical and computational — we must also account for human cognitive biases. Anti-Blackness has played a significant role in the proliferation of algorithmic harms in predominantly Black communities.
Mariah: What can other local and national advocates do to fight back against discriminatory facial recognition technology?
Tawana: I would like to see other local and national advocates push for the technology to be banned from law enforcement and government use. Several predominantly white cities have had some success with this. However, I have come to the reluctant and unfortunate conclusion that a ban is not a likely outcome for most predominantly Black cities, especially ones struggling with quality-of-life crime. New Orleans organizers had some success for a while, but their gains were reversed as empathy over George Floyd’s murder by police waned across government institutions and racial justice progress was rolled back. Jackson, Mississippi, which is about 50 percent Black and 45 percent white, appears to still have its ban in place.
Short of a ban, I would argue that a moratorium on its use should be implemented until racial disparities can be eradicated. That would require systemically rooting out racial discrimination at every stage of research, design, development, and use.
Mariah Wildgen is a senior strategic communications manager at The Leadership Conference and its Center for Civil Rights and Technology.