Data Privacy and AI Safeguards Are Essential to Protect Civil Rights
By Frank Torres
Today, the Center for Civil Rights and Technology released dual findings that illustrate both why we need comprehensive data privacy and AI safeguards that center our civil rights and how to achieve them. The first, State Data Privacy Laws & Civil Rights Protections, is an interactive survey of states showing that 132.59 million people of color lack protection against data-based discrimination, including the use of people's personal information in AI. In fact, our survey found that only two states' privacy laws include meaningful civil rights protections. Congress has also failed to pass a comprehensive federal data privacy law.
The second, Snapshot: Civil Rights, AI, and Privacy, is a legislative brief that provides a readout of where things stand in implementing AI and data privacy safeguards across the federal government. The Snapshot shares collective insights raised during a convening the Center held in May 2024 with other civil society groups about the most pressing issues in the AI and privacy debates. It identifies critical civil rights protections for AI and data privacy, and it offers a portrait of civil society's views on current actions by Congress and the administration to protect people's rights in how their data is collected and how technologies like AI are used.
Why are data privacy and AI safeguards important? They are essential to protect people from harm, including the harm of biased decisions. Imagine entering a bank to get a loan: You've got good credit and you complete the application, only to get denied. Or you don't get the interview for the job you applied for, even though you tick every box under experience. Imagine you go to the doctor because you're feeling sick, only to be sent home with medicine that doesn't relieve your symptoms. Imagine the police showing up at your door to arrest you for shoplifting even though you never visited the store in question. In each of these cases, biased data and AI may be the cause.
These are not imaginary scenarios:
- In lending, algorithms used by some financial institutions have been found to be 40 to 80 percent more likely to deny loans to borrowers of color. This bias stems from historical lending data in which borrowers of color were disproportionately denied loans.
- In hiring, Amazon's experimental AI recruiting tool was found to downgrade the resumes of women candidates. The tool was trained on resumes submitted over a 10-year period, most of which came from men, leading the AI to favor male candidates.
- In health care, an algorithm used in U.S. hospitals to predict which patients would need extra medical care was found to favor white patients over Black patients. The bias arose because the algorithm used past health care spending as a proxy for medical need, and historical disparities in health care access meant less had been spent on Black patients who were just as sick.
- In the criminal-legal system, the COMPAS algorithm, used in U.S. courts to predict the likelihood that a defendant will reoffend, was found to be biased against Black defendants: it incorrectly flagged them as high risk more often than it did white defendants (the sketch after this list shows how an audit can measure that kind of gap). And in case after case, facial recognition has led to wrongful arrests because it misidentified innocent people as suspects.
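At bottom, the COMPAS finding was a measurable gap in error rates between groups, and that gap is something an audit can check before a tool is ever used on real people. Below is a minimal sketch in Python of such a check; the records, group labels, and rates are hypothetical, invented purely for illustration, and are not drawn from the COMPAS data.

```python
# Minimal sketch of a false-positive-rate audit, the kind of check that
# surfaced the COMPAS disparity. Every record below is hypothetical.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["flagged_high_risk"])
    return flagged / len(non_reoffenders)

# Hypothetical risk-tool outputs for two demographic groups
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
]

for group in sorted({r["group"] for r in records}):
    subset = [r for r in records if r["group"] == group]
    print(f"group {group}: false positive rate {false_positive_rate(subset):.2f}")

# A fair system should show similar rates across groups; a large gap,
# like the one found with COMPAS, is a red flag.
```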
These outcomes are not random: they happen because of flawed decisions driven by faulty AI. Those decisions have wrongfully harmed real people. And it could happen to you. But this doesn't have to be our reality. We can address bias in AI and ensure the technology treats everyone fairly.
We know what we must do to achieve a more equitable AI and data future. What we're asking for is simple: Ensure that the AI works. Ensure that it is tested before it is put into use. Give people the right to know when AI is being used and the ability to challenge AI-driven decisions. Prohibit the use of AI systems that are unfair, biased, or discriminatory. Systems that are unreliable and untrustworthy should not be used to make life-impacting decisions. It is not too much to ask. Yet Congress, tech developers, and deployers have failed to embrace comprehensive safeguards, despite promises to make AI safer for all.
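To make "tested before it is put into use" concrete: one long-standing yardstick in U.S. employment-discrimination analysis is the four-fifths rule, under which a group's selection rate below 80 percent of the highest group's rate signals possible disparate impact. The sketch below, using a hypothetical AI screening tool's decisions invented for illustration, shows how simple such a pre-deployment check can be.

```python
# Hedged sketch of a pre-deployment disparate-impact check based on the
# "four-fifths rule": flag the system if any group's selection rate falls
# below 80 percent of the highest group's rate. All data is hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        chosen[group] += int(selected)
    return {g: chosen[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    return ratio >= threshold, rates

# Hypothetical output of an AI screening tool: 60% of men selected,
# 35% of women selected.
decisions = ([("men", True)] * 60 + [("men", False)] * 40
             + [("women", True)] * 35 + [("women", False)] * 65)

ok, rates = passes_four_fifths(decisions)
print(rates)  # {'men': 0.6, 'women': 0.35}
print("ok to proceed" if ok else "do not deploy: possible disparate impact")
```

A check like this is a floor, not a ceiling: passing the ratio test does not make a system fair, but failing it is a clear signal the system should not be making life-impacting decisions.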
There is another crucial element related to data and AI that must not be overlooked: the impact on working people. Importantly, the Snapshot notes concerns about whether AI will displace, discriminate against, or exploit workers. To address these potential harms, AI policies must keep workers in mind, ensure a just transition for workers who will be displaced by AI, provide education and retraining, prohibit AI tools that have a disparate impact, and encourage multilingual products and services.
The lack of comprehensive privacy protections in the United States and the rapid advancement of AI technology highlight the urgent need for robust safeguards. While the current patchwork of privacy laws leaves significant gaps, recent actions by the Biden-Harris administration, including its implementation of the AI executive order, demonstrate a commitment to addressing these challenges. It is possible to protect and promote the civil rights of people across the United States in the digital age. That's real innovation.
As we move forward, it is essential for policymakers, technology developers and deployers, and civil society to work together to create a safer and more equitable tech and AI future that upholds our nation’s values. Only through collaboration can we ensure that technological advancements benefit us all.
Frank Torres is the civil rights and technology fellow at the Center for Civil Rights and Technology at The Leadership Conference on Civil and Human Rights. In 2020, Frank retired from Microsoft after nearly 20 years at the company, where he was the director of public policy in Microsoft’s Office of Responsible AI, the team leading the company’s responsible development and deployment of AI.