Reflections on Civil Rights and Our AI Future

The growing landscape of AI policy must continue to center equity and civil rights

Technology now shapes nearly every aspect of modern life. While technological progress can benefit everyone, many artificial intelligence (AI) tools can also carry tremendous risks for civil rights. Automated hiring systems increasingly determine who gets a job, while algorithms set rent, screen tenants, and make decisions about granting loans. A 2020 survey found that “55 percent of human resources leaders use predictive algorithms in hiring,” but those hiring algorithms often serve to automate biases against women, people with disabilities, and African Americans that are all too common in our society. Facial recognition technology is being deployed in high-stakes settings like immigration, policing, and housing, where it often discriminates against people of color because it is less accurate at identifying non-white faces and thus draws dangerously wrong conclusions.

These risks are not theoretical. Inaccuracies in facial recognition technology used by Customs and Border Protection systematically prevented Black asylum seekers from requesting refuge. And a flawed risk assessment algorithm disproportionately keeps people of color in federal prisons, even after the Justice Department admitted the tool has racial biases.

Rather than entrench bias and automate discrimination, technology should create opportunity, safety, and benefits for all. Recognizing that the use of technology was far outpacing the laws and enforcement needed to protect civil rights, The Leadership Conference on Civil and Human Rights and more than a dozen public interest organizations in 2014 published Civil Rights Principles for the Era of Big Data to highlight the growing need to protect and strengthen key civil rights protections in the face of technological change.

Today, that gap is still far too wide. But thanks to a decade of work from civil rights advocates and policymakers who have led efforts to ensure AI and other automated decision-making technologies respect civil rights, we have made real progress. While more work is needed, it is worth pausing to reflect upon AI policy progress thus far, including the critical role of the civil rights community in shaping the present landscape. 

Shortly after the publication of the Civil Rights Principles for the Era of Big Data, the Obama administration’s Big Data and Privacy Working Group released its findings, which cited the principles and included recommendations to protect privacy and prevent algorithmic discrimination. The working group’s findings were significant, but living up to their ambitions required much more work — and the path forward was winding.

Under the Trump administration, we saw limited progress and — at times — retrenchment in the form of rules that could have increased algorithmic bias. For example, in 2019 the Department of Housing and Urban Development (HUD) proposed a rule that would have dramatically weakened its interpretation of the Fair Housing Act’s “disparate impact” standard by limiting housing providers’ liability when using algorithmic models. In late 2020, President Trump issued an executive order promoting the federal government’s use of AI, but this guidance was incomplete: the order failed to create appropriate oversight and inadequately addressed equity.

The civil rights community continued to sound the alarm. In 2020, The Leadership Conference and more than two dozen public interest organizations unveiled updated Civil Rights Principles for the Era of Big Data. And then in 2021, the Biden administration entered office, well positioned to take up the unfinished work of the Obama administration on ensuring technology respects civil rights and advances the public interest.

President Biden’s first executive order (Executive Order 13985) created an interagency working group on equitable data and established civil rights and equity as central considerations for federal agencies. Civil rights organizations provided vital recommendations on how to actually achieve the objectives outlined in Executive Order 13985. In July 2021, a coalition of civil rights organizations (including The Leadership Conference) sent a letter to the White House Office of Science and Technology Policy (OSTP) on centering civil rights in AI policy. This letter included three memos on financial services discrimination, housing discrimination, and hiring discrimination that were sent to relevant federal agencies. Collectively, these resources offered an array of both high-level and specific guidance for the Biden administration and federal agencies to address and remediate technology’s role in discrimination.

In October 2021, OSTP leadership penned an op-ed about the need for an AI Bill of Rights, announcing that the agency was developing principles to protect civil rights in emerging technologies. HUD, having announced in 2021 that it would restore the fair housing protections gutted under the previous administration, reinstated the Obama-era disparate impact rule in 2023, reiterating that policies with discriminatory effects against a protected class are unlawful under the Fair Housing Act.

In response to the January 2021 executive order and the OSTP op-ed, The Leadership Conference and dozens of civil rights and technology justice organizations also wrote a letter to Ambassador Susan Rice, director of the Domestic Policy Council, encouraging greater federal leadership on ensuring data-driven technologies “protect civil rights, prevent unlawful discrimination, and advance equal opportunity.” The letter also urged the Biden administration to engage with community stakeholders, prioritize equity assessments, enforce anti-discrimination laws that apply to data-driven technologies, and ensure that federal R&D investments include research on anti-discrimination and equity.

Among the most pivotal developments in federal approaches to AI was the October 2022 publication of the Blueprint for an AI Bill of Rights, which set forth principles for “the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” Recognizing that the blueprint itself was a set of principles, not policy or concrete protections, the Biden administration also announced a number of promising actions and initiatives related to the Blueprint for an AI Bill of Rights, including increased enforcement by the Department of Labor of required surveillance reporting, exploration of new Federal Trade Commission rules regarding consumer surveillance, new guidance from the Department of Education regarding AI for teaching and learning, proposed rules to prohibit algorithmic discrimination in health care, and new guidance from the Department of Housing and Urban Development regarding tenant screening algorithms.

Numerous civil rights advocates praised the blueprint and corresponding agency initiatives, noting the ways in which the blueprint is built upon recommendations from the civil rights community. They also emphasized the importance of the blueprint as a starting point for the public and private sector, not a regulatory framework in and of itself.

In 2023, the administration continued to build on these cornerstones. In a January op-ed in the Wall Street Journal, President Biden signaled his commitment “to hold Big Tech accountable,” including a push for more algorithmic transparency and stronger limitations on the collection and use of personal data.

A major victory for civil rights advocates came soon after: President Biden’s February racial equity executive order, which directed federal agencies to strengthen equity-advancing requirements and deliver better, more equitable outcomes. In addition to providing a high-level directive for agencies to prevent and remedy discrimination and advance equity for all, this executive order also offered a clear, specific definition of “algorithmic discrimination” and required agencies to “prevent and remedy discrimination, including by protecting the public from algorithmic discrimination.” It also required agencies to ensure their development, purchase, and use of AI advances equity. This executive order is another critical step in the right direction, enshrining key principles and definitions from the Blueprint for an AI Bill of Rights into formal policy. 

Many agencies have taken encouraging initial steps, both before and after the Blueprint for an AI Bill of Rights was released. The Equal Employment Opportunity Commission (EEOC) launched its AI and Algorithmic Fairness Initiative in 2021, which includes technical assistance, research efforts, and public listening sessions. The EEOC and the Department of Justice (DOJ) published a technical assistance document in March 2022 warning employers against disability discrimination caused by AI and automated systems in employment settings. In June 2022, the Federal Trade Commission (FTC) issued a report to Congress cautioning against the use of AI as a solution to online problems, detailing the many ways in which AI tools can be invasive, biased, discriminatory, and inaccurate. In January 2023, the DOJ reached a major settlement with Meta that requires the company to change its ad delivery system in order to prevent discriminatory ads for housing. The Consumer Financial Protection Bureau and the FTC are currently seeking public comment on how tenant screening algorithms may cause discriminatory outcomes.

This work must not stop here. Principles from the Blueprint for an AI Bill of Rights, agency actions, and a mandate via racial equity executive orders are a start, but further implementation and enforcement are urgently needed. All federal agencies must do their part to ensure that long-standing civil rights are protected against the threats automated systems pose. For our part, The Leadership Conference and civil rights advocates will continue to urge policymakers to ensure that technology serves the best interests of each of us. For that to happen, equity, civil rights, and the ways AI and automated systems impact the daily lives of people across the United States must remain central to our algorithmic future.