Introduction
Scientific breakthroughs in machine learning, natural language processing, computer vision, and other advanced techniques have taken artificial intelligence out of the realm of science fiction and directly into our lives. AI systems are being used today to make decisions about us. They screen job applications, evaluate creditworthiness for home loans, help decide who can rent an apartment, flag people for suspicion of benefits fraud, target and surveil immigrant communities, make recommendations impacting healthcare, and influence who goes to jail through bail and sentencing recommendations. Investors are pouring billions of dollars into the technology on the promise that it can do even more.[1] As AI’s role grows in decisions that affect our rights and opportunities, it is imperative that the technology be fair and nondiscriminatory.
Ideally, the use of AI would produce more fairness by minimizing the influence of human bias and artificial barriers to opportunity. But like other technological advances before it, AI is not neutral.[2] An AI system is shaped by how it was created. It may be trained on biased data, or it may reflect the lived experiences and biases of a design team drawn from a narrow demographic. Designed to recognize and learn from patterns, an AI system can deepen disadvantage by applying stereotypes and replicating the effects of past discrimination at unprecedented scale and speed. Or an algorithm might overweight unnecessary factors that correlate with identity and thereby introduce new forms of discrimination. The result is that the use of AI can produce biased predictions, bad decisions, and harmful outcomes.
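To picture the proxy problem in the simplest terms, consider the toy simulation below (all data synthetic, all names hypothetical). It is a minimal sketch, not a model of any real system: a decision rule that never sees a protected trait can still reproduce group disparities when it learns from a feature, here ZIP code, that correlates with that trait.

```python
import random

random.seed(0)

# Hypothetical synthetic data: ZIP code correlates with demographic group,
# and historical hiring outcomes were skewed by ZIP code.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # ZIP 1 is mostly group A; ZIP 2 is mostly group B (a proxy correlation).
    zip_code = 1 if random.random() < (0.8 if group == "A" else 0.2) else 2
    # Biased history: hired at unequal rates depending on ZIP code alone.
    hired = random.random() < (0.6 if zip_code == 1 else 0.3)
    applicants.append((group, zip_code, hired))

# A system trained on this history never sees "group", but a pattern-learner
# would extract the rule "ZIP 1 -> hire", reproducing the disparity anyway:
def model_predicts_hire(zip_code):
    return zip_code == 1

def selection_rate(group):
    preds = [model_predicts_hire(z) for g, z, _ in applicants if g == group]
    return sum(preds) / len(preds)

print(selection_rate("A"), selection_rate("B"))
# Roughly 0.8 vs. 0.2: a stark disparity, without the model ever using race.
```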
Systems that disadvantage people based on arbitrary and irrelevant factors rob us of the chance to succeed on our own merit. The discrimination can ripple through communities, denying opportunities based on personal traits such as race, sex, sexual orientation, gender identity, religion, age, disability, or other traits on which discrimination is illegal (known as protected characteristics).
Civil rights laws that prohibit disparate treatment—usually involving intentional discrimination—are inadequate to combat such harms. After all, most AI systems are not deliberately designed to discriminate based on protected characteristics. Moreover, datasets and model designs are often proprietary corporate secrets, or so complex that they are effectively black boxes.[3] This can make it exceedingly difficult, if not impossible, to discover when algorithms do treat people differently based on their identity.
Fortunately, there is another legal doctrine that has protected civil rights for over half a century: disparate impact liability. This doctrine tests for invisible barriers[4] to equal opportunity—hidden unfairness based on race, sex, or another irrelevant factor that may be baked into a decisionmaking system. Under disparate impact law, an apparently neutral system that in practice hurts people with a shared protected characteristic is unlawful unless (a) it serves a substantial and important interest, and (b) there is no less discriminatory way to design the system. This doctrine allows victims of algorithmic discrimination to challenge unfair AI systems and seek justice without having to prove the creators’ intent to discriminate. It also creates the right incentives for AI developers to test for discriminatory results and adjust their training data and model architecture to make their systems fair.[5] (Disparate impact, as explained below, is entirely different from affirmative action.)
Put more simply, disparate impact liability helps make sure AI-based decisionmaking systems identify qualified people—like strong job applicants and good credit risks—rather than allowing outputs to be skewed unfairly because of race and other traits.
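To make the testing incentive concrete, one common first-pass screen that developers might run when auditing a system is the "four-fifths rule" drawn from the EEOC's Uniform Guidelines on Employee Selection Procedures, under which a selection rate for any group that falls below 80 percent of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below is a minimal illustration of that heuristic (the data and function names are hypothetical); it is an audit screen, not a statement of the full legal standard, which also asks about justification and less discriminatory alternatives.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Under the four-fifths rule of thumb, ratios below 0.8 warrant scrutiny."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit data: (demographic group, did the model select the applicant?)
records = ([("A", True)] * 60 + [("A", False)] * 40
           + [("B", True)] * 35 + [("B", False)] * 65)

rates = selection_rates(records)       # A: 0.60, B: 0.35
ratios = adverse_impact_ratios(rates)  # A: 1.00, B: ~0.58
flagged = [g for g, r in ratios.items() if r < 0.8]
print(rates, ratios, flagged)          # group B falls below the 0.8 threshold
```

In practice, such a ratio check would be paired with tests of statistical significance and, where a disparity appears, a search for changes to the data or model that reduce it.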
Unfortunately, disparate impact liability is currently under attack. In April 2025, President Donald Trump announced his administration would “eliminate the use of disparate-impact liability in all contexts to the maximum degree possible.”[6] He ordered agencies to repeal federal disparate impact regulations—which the Justice Department promptly did, upending over 50 years of law without taking public comment.[7] Trump is also pushing Congress to preempt state laws regulating AI, including state disparate impact statutes. That effort having stalled so far, he issued an executive order in December 2025 directing federal agencies to argue, on far-fetched legal theories, that existing federal law preempts those state statutes.[8]
This report explains why disparate impact is needed now more than ever and why undermining the doctrine is wrong. In brief, the report: (1) discusses the origin and nature of disparate impact liability; (2) explains how bias materializes in automated systems and how disparate impact can remedy and prevent discriminatory AI; and (3) demonstrates that President Trump’s attempt to eliminate disparate impact rests on serious legal errors. Notwithstanding the federal government’s abandonment of the doctrine, disparate impact liability remains the law. Robust state and private enforcement will help ensure that technological progress does not come at the expense of equality and that everyone can benefit from the promise of AI.
[1] Who Will Pay for the AI Boom?, The Economist (July 31, 2025), https://www.economist.com/business/2025/07/31/who-will-pay-for-the-trillion-dollar-ai-boom.
[2] See Reva Schwartz et al., Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, Special Publication, National Institute of Standards and Technology, at ii (2022), https://doi.org/10.6028/NIST.SP.1270 (“Bias is neither new nor unique to AI and it is not possible to achieve zero risk of bias in an AI system.”).
[3] Matthew Kosinski, IBM, What Is Black Box AI? (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai (defining black box AI as an AI in which “[u]sers can see the system’s inputs and outputs, but they can’t see what happens within the AI tool to produce those outputs”).
[4] ReNika Moore, in conversation with the author, Sept. 16, 2025.
[5] See Chiraag Bains, The Legal Doctrine that Will Be Key to Preventing AI Discrimination, Brookings (Sept. 13, 2024), https://www.brookings.edu/articles/the-legal-doctrine-that-will-be-key-to-preventing-ai-discrimination.
[6] President Donald J. Trump, Executive Order 14281, Restoring Equality of Opportunity and Meritocracy, 90 Fed. Reg. 17537 (Apr. 23, 2025), https://www.federalregister.gov/documents/2025/04/28/2025-07378/restoring-equality-of-opportunity-and-meritocracy.
[7] Id.; Final Rule, Rescinding Portions of Department of Justice Title VI Regulations to Conform More Closely With the Statutory Text and to Implement Executive Order 14281, 90 Fed. Reg. (Dec. 10, 2025), https://www.govinfo.gov/content/pkg/FR-2025-12-10/pdf/2025-22448.pdf.
[8] President Donald J. Trump, Executive Order 14365, Ensuring a National Policy Framework for Artificial Intelligence, 90 Fed. Reg. 58499 (Dec. 11, 2025), https://www.federalregister.gov/public-inspection/2025-23092/artificial-intelligence-efforts-to-ensure-national-policy-framework-eo-14365. See Charlie Bullock, Legal Obstacles to Implementation of the AI Executive Order (Dec. 2025), https://law-ai.org/legal-obstacles-to-implementation-of-the-ai-executive-order.