Disparate Impact as Uniquely Relevant in the Age of AI
A. How AI Systems Perpetuate and Amplify Discrimination
Understanding how AI systems produce discriminatory outcomes makes clear why disparate impact doctrine is essential to protecting people from those harms.
With appropriate safeguards, AI may be able to increase reliance on objective factors and reduce the opportunity for human bias to skew decisionmaking. For example, an AI tool that accurately measures skills might be preferable to a human being susceptible to stereotypes or preferences for applicants in their social network. AI tools might even help close opportunity gaps by directing resources to historically underserved neighborhoods and populations. Consider an underwriting algorithm that uses non-standardized, nontraditional information like cash flow data to expand access to credit,[1] or an AI tool that better predicts cardiovascular risk by analyzing diagnostic test results, health records, and activity data from smartwatches.[2] One can see how automation creates the tantalizing possibility of improving fairness.
AI systems are not inherently neutral, however. They can internalize and cause discrimination in several ways, based on the data they’re trained on and how they’re designed and deployed:
1. Biased training data
An AI system typically learns by iteratively analyzing and recognizing patterns in huge amounts of “training data”—text, images, audio, video, and other inputs that are “fed” into the system’s mathematical algorithm.[3] It then applies those learnings to make predictions, recommendations, or decisions based on likely future outcomes (predictive AI) or to generate material (generative AI) based on new inputs.[4]
- The training data may suffer from representation bias, under- or over-representing certain traits in the population the AI will be used to evaluate and thereby skewing its outputs. For example, the underrepresentation of women and people of color in images used to train certain facial recognition tools may explain researchers’ findings that the tools worked nearly perfectly in identifying lighter-skinned men but repeatedly failed to recognize darker-skinned women.[5]
- In supervised learning—a type of model training in which the AI learns from data manually tagged by human beings—labeling bias can occur when these human annotators systematically label the training data incorrectly, inconsistently, or in ways that reflect social biases. One study of crowdsourced hate speech datasets found that annotators disproportionately labeled Twitter posts in Black vernacular English as offensive or abusive. Automated content moderation models trained on those datasets “acquire and propagate” that bias, flagging Black vernacular posts in test runs at disproportionately high rates.[6]
- AI systems can also internalize stereotypes through embedding bias. Word embeddings are numerical representations of words that capture how close each word tends to appear to other words in an AI system’s training data; AI uses them to understand and process natural language. Reflecting pervasive stereotypes, researchers analyzing huge corpora of internet text have found that words like “computer programmer,” “pilot,” and “champion” appear closer to words like “man,” while “homemaker,” “maid,” and sexual profanities appear closer to words like “woman.”[7] One study even found that names associated with being European American are more closely associated with positive words like “loyal” and “honest,” while names associated with being African American are more closely associated with negative words like “sickness” and “assault.”[8] Studies show that image classifiers can reflect similar biases—for example, identifying a man as a woman because he is standing in a kitchen.[9] Such embedding biases could cause harmful results in AI systems used to screen resumes, recommend people for promotion, assess recidivism risk, respond to chatbot queries, or even rank web search results. (A short numerical sketch at the end of this subsection illustrates how such associations are measured.)
- Training data may also be rife with historical bias, causing AI to replicate discrimination in past human decisions. When Amazon trained a recruiting algorithm on ten years of resumes from its predominantly male workforce, the system learned to privilege men’s resumes and penalize women’s.[10] A health care algorithm affecting millions of people consistently underestimated the medical needs of Black patients because it based its assessments on past health care spending and learned that American health care systems had historically spent “less money caring for Black patients than for White patients.”[11] Similarly, lending algorithms trained on past loan decisions could reproduce historical redlining patterns. Criminal justice algorithms trained on arrest data could penalize people and communities subjected to racial profiling.
Worse, AI systems that learn statistical patterns from biased data may also optimize for them. An AI tool might not just reproduce historical discrimination, but could supercharge it by applying it as a rule, at staggering scale and speed.[12]
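To make the embedding-bias mechanism concrete, the sketch below uses invented four-dimensional “word vectors” as stand-ins for the high-dimensional embeddings learned from web text. It is illustrative only; the numbers are hypothetical, but the cosine-similarity comparison is the kind of measurement the studies cited above apply to real embeddings at scale.

```python
# Illustrative only: toy 4-dimensional "word vectors" with made-up values,
# standing in for the embeddings an AI system learns from internet text.
import numpy as np

vectors = {
    "man":        np.array([0.9, 0.1, 0.3, 0.0]),
    "woman":      np.array([0.1, 0.9, 0.3, 0.0]),
    "programmer": np.array([0.8, 0.2, 0.5, 0.1]),   # hypothetical: skewed toward "man"
    "homemaker":  np.array([0.2, 0.8, 0.5, 0.1]),   # hypothetical: skewed toward "woman"
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the words appear in more similar contexts."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for occupation in ("programmer", "homemaker"):
    # A positive gap means the occupation sits closer to "man" than to "woman"
    # in the embedding space -- the kind of association the cited studies measure.
    gap = cosine(vectors[occupation], vectors["man"]) - cosine(vectors[occupation], vectors["woman"])
    print(f"{occupation:>10}: similarity-to-man minus similarity-to-woman = {gap:+.3f}")
```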
2. Biased algorithmic design
Choices about model development and deployment create further opportunities for discrimination.
- Developers can introduce bias through feature selection and weighting in the model architecture. For example, a lending algorithm designed to weight ZIP code as a predictor of creditworthiness could systematically disadvantage applicants of color because, due to persistent residential segregation, ZIP code can function as a proxy for race.[13] Surnames or language preference can likewise serve as proxies for race or national origin.[14] Incorporating these proxies into an algorithm may allow the AI to make decisions based on protected characteristics without doing so expressly. Another example of biased feature selection is a pretrial detention or sentencing algorithm designed to predict recidivism risk from arrest data; arrest data measures police activity better than it measures criminal conduct or the likelihood of reoffending.[15]
- Another problem is deployment bias, which happens when AI systems are used in contexts different from their training environment. A hiring tool trained on one company’s data might not work fairly across different industries or regions. A tool trained on urban area data may have a high failure rate in rural areas.
- The creation of feedback loops, in which a model’s outputs influence its further training and refinement, can also produce bias. For example, studies have shown that when predictive policing models cause officers to be deployed to an area, data on the arrests they make there are fed back into the model. The model can misread the resulting increase in arrests as an increase in crime and send still more officers to the same area, “regardless of the true crime rate.”[16] (A toy simulation after this list illustrates the dynamic.) This type of self-fulfilling prediction can perpetuate over-policing in low-income neighborhoods and communities of color, create the false impression that their residents are dangerous, extend mass incarceration, and leave crime unaddressed elsewhere.
- Many AI systems, particularly those using deep learning and neural networks, are black boxes whose internal processes are proprietary and therefore secret. Some are so complex that they are opaque even to their creators. This opacity makes it difficult, and often impossible, for affected people to detect whether a model is relying on protected characteristics. The problem is compounded by automation bias, our tendency to defer to automated systems, and the closely related concept of technochauvinism, the belief that technology is always superior to other solutions.[17]
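To illustrate the feedback loop described in the list above, here is a toy simulation. Every number is an assumption chosen for illustration, and the allocation rule is deliberately simplified so that all patrols go to the neighborhood with more recorded incidents; the cited research analyzes more realistic versions of the same dynamic.

```python
# Two neighborhoods with identical underlying crime rates, but a small,
# arbitrary initial gap in *recorded* incidents. All numbers are assumptions.
true_crime_rate = [0.10, 0.10]
recorded = [11, 10]
patrol_days = [0, 0]

for day in range(365):
    # The model sends patrols to whichever neighborhood has more recorded incidents...
    target = 0 if recorded[0] >= recorded[1] else 1
    patrol_days[target] += 1

    # ...but incidents are only recorded where officers are present to observe them,
    # so the patrolled neighborhood keeps accumulating records while the other
    # neighborhood's crime goes entirely unrecorded.
    recorded[target] += true_crime_rate[target]

print("Recorded incidents after one year:", recorded)     # -> [47.5, 10]
print("Patrol-days per neighborhood:     ", patrol_days)  # -> [365, 0]
# Despite identical true crime rates, the neighborhood that started one recorded
# incident "ahead" receives every patrol and appears far more "dangerous" in the data.
```

The point is not the particular numbers but the structure: the data the model learns from is itself a product of the model’s earlier predictions.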
B. How Disparate Impact Helps Us Combat Algorithmic Discrimination
Disparate impact doctrine helps surface and root out these sources of bias in ways that disparate treatment doctrine alone cannot.
Machines act based on the programming that human developers give them.[18] Developers, meanwhile, tout their reliance on data and math as a sign that their algorithms are objective, neutral, and trustworthy. When AI nonetheless discriminates in practice, disparate treatment law typically won’t help, because the discriminatory intent it requires is difficult or impossible to prove when the decisionmaker is an opaque model. Disparate impact liability, however, gives people an avenue for redress.
The potential for liability also creates the incentive to prevent discrimination before it happens. It encourages system design and testing to minimize bias before deployment, rather than after things have gone wrong. For example, disparate impact law gives a mortgage lender a reason to make sure its AI models do not include factors that overstate risk of default or understate likelihood of repayment for borrowers of a certain race. It requires employers who use AI to ensure their algorithms assess applicants for qualities relevant to the job in question. It pushes AI developers and deployers to explore less discriminatory alternatives—say, avoiding biased training data or changing the algorithm to be fairer while still serving the company’s valid purposes.
There are ample ways to make discriminatory models fairer while retaining—or improving—the accuracy of their predictions.[19]
Some training bias may be avoided with forethought, like ensuring that facial recognition software is trained on representative images. Other biases such as historical, labeling, or embedding bias may be harder to remove, but researchers have identified multiple “de-biasing” techniques to reduce their effects.[20]
Variables can be added, removed, or weighted differently in an algorithm. Developers can refine a model’s “hyperparameters,” settings that instruct the algorithm on how to learn from training data.[21] They can also use “adversarial de-biasing,” in which they build a second model that tries to find inputs that will cause the primary AI model to exhibit biased behavior, essentially acting as an automated red team identifying weaknesses.[22] The adversarial model and primary model are trained together in a competitive process that maximizes the robustness of the primary model’s predictions while minimizing unfairness.[23]
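As a rough sketch of the adversarial approach, the code below pairs a simple predictor with an adversary on synthetic data using the PyTorch library. The data, the tiny linear models, and the weight on the fairness penalty are all hypothetical stand-ins; production systems implement far more elaborate versions of this competitive training.

```python
import torch
import torch.nn as nn

# Hypothetical synthetic data: 1,000 applicants, five features, a binary outcome y,
# and a binary protected attribute `a` that leaks into one of the features.
torch.manual_seed(0)
n = 1000
a = torch.randint(0, 2, (n, 1)).float()          # protected attribute (illustrative flag)
x = torch.randn(n, 5)                            # five applicant features
x[:, 0:1] += 1.5 * a                             # feature 0 quietly acts as a proxy for `a`
y = ((x[:, 1] + x[:, 2] + 0.5 * a.squeeze(1)) > 0).float().unsqueeze(1)

predictor = nn.Linear(5, 1)                      # primary model: scores applicants
adversary = nn.Linear(1, 1)                      # adversary: tries to recover `a` from the score
opt_pred = torch.optim.Adam(predictor.parameters(), lr=0.01)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=0.01)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                        # how heavily to penalize leakage of `a`

for step in range(2000):
    # (1) Adversary's turn: learn to infer the protected attribute from the
    #     predictor's output, with the predictor held fixed.
    with torch.no_grad():
        scores = predictor(x)
    opt_adv.zero_grad()
    adv_loss = bce(adversary(scores), a)
    adv_loss.backward()
    opt_adv.step()

    # (2) Predictor's turn: stay accurate on the outcome while making the
    #     adversary's job as hard as possible (the competitive objective).
    opt_pred.zero_grad()
    scores = predictor(x)
    task_loss = bce(scores, y)
    leak_loss = bce(adversary(scores), a)
    (task_loss - lam * leak_loss).backward()
    opt_pred.step()

print(f"task loss {task_loss.item():.3f}, adversary loss {adv_loss.item():.3f}")
```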
Indeed, because of a concept called “model multiplicity”—the reality that “there are almost always multiple possible models with equivalent accuracy for a given prediction problem”—a model that produces discriminatory effects can frequently be replaced by a less discriminatory version.[24] But model developers may not test for discriminatory effects or consider alternative models. Disparate impact liability gives them a reason to do so—before they cause harm and get sued.[25]
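What such testing and searching might look like can be sketched briefly, assuming synthetic data and the scikit-learn library: train several candidate models on different feature subsets, score each for accuracy and for the ratio of selection rates between two groups, and among the near-equally accurate candidates choose the least disparate one. The data, the feature subsets, and the one-percentage-point accuracy tolerance below are invented for illustration.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

# Hypothetical synthetic data: feature 0 doubles as a proxy for group membership,
# and the outcome depends only weakly on that proxy.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)
X = rng.normal(size=(n, 4))
X[:, 0] += 1.0 * group
y = (X[:, 1] + X[:, 2] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

def impact_ratio(pred, group):
    """Ratio of the two groups' selection rates (1.0 = parity; lower = more disparity)."""
    r0, r1 = pred[group == 0].mean(), pred[group == 1].mean()
    return min(r0, r1) / max(r0, r1)

candidates = []
for features in [list(range(4))] + [list(c) for c in combinations(range(4), 3)]:
    model = LogisticRegression().fit(X[:, features], y)
    pred = model.predict(X[:, features])
    candidates.append({
        "features": tuple(features),
        "accuracy": round(float((pred == y).mean()), 3),
        "impact_ratio": round(float(impact_ratio(pred, group)), 3),
    })

best_acc = max(c["accuracy"] for c in candidates)
# Keep every candidate within one percentage point of the best accuracy,
# then choose the one whose selection rates are closest to parity.
near_best = [c for c in candidates if c["accuracy"] >= best_acc - 0.01]
chosen = max(near_best, key=lambda c: c["impact_ratio"])

for c in candidates:
    print(c)
print("least discriminatory near-equivalent:", chosen)
```

With data shaped like this, a candidate that drops the proxy feature will often be nearly as accurate while coming noticeably closer to parity, which is the model-multiplicity point in miniature.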
Pushing bias-prevention efforts upstream, into the model development phase, can also be profitable. Models that are accurate and fair for all people help employers identify the most qualified employees, help lenders take prudent credit risks, and help businesses attract more customers. Many AI developers themselves recognize the risk of automated systems producing or replicating bias and have established responsible AI practices aimed at mitigating it. Companies that rely on biased AI risk being outcompeted by those whose models assess people more accurately.[26]
Finally, although most AI systems may not specifically rely on race or sex to make predictions, some do. But injured parties often lack access to the information to find that out. Disparate impact helps here, too. By giving plaintiffs who allege discriminatory effects access to the discovery process in litigation, the doctrine gives them a chance to smoke out evidence that a model in fact classifies people based on protected characteristics. They could then file intentional discrimination claims they otherwise would never have known they had.[27]
In all of these ways, disparate impact is beneficial and indeed indispensable to combating algorithmic discrimination. By eliminating arbitrary barriers, it supports genuine meritocracy.
C. Examples of Recent Legal Actions
Recent legal actions demonstrate both the necessity and efficacy of disparate impact liability in reining in algorithmic discrimination. The examples below largely involve predictive AI systems. Generative AI systems are also beginning to shape processes that affect rights and opportunities.
Employment
Job Application Screening: Derek Mobley, an African American man over 40 with anxiety and depression, applied for over 100 jobs through Workday Inc.’s AI-powered applicant screening and ranking platform. Despite his qualifications—including a finance degree from Morehouse College, a certification in server management, and work experience—Mobley was rejected in every case. Once, the rejection came within an hour of applying; another time, he was rejected for the job he was currently doing for the same company as a contractor. He sued Workday, alleging that its algorithm discriminated against him intentionally through disparate treatment and unintentionally through disparate impact based on race, age, and disability. In July 2024, a federal judge ruled that AI vendors like Workday can be held liable as agents of employers under federal anti-discrimination laws. Notably, the court dismissed Mobley’s disparate treatment claim, finding insufficient indications of discriminatory intent, but allowed his disparate impact claims to proceed.[28] In May 2025, for his age discrimination claim, the judge preliminarily certified a collective action of others who were harmed by Workday’s allegedly discriminatory AI.[29] In July 2025, the judge ruled that Workday must provide a list of employers that enabled its AI features to “score, sort, rank, or screen applicants.”[30]
The case could have nationwide consequences. Workday and its competitors provide AI-powered applicant tracking systems to thousands of companies, including over 98% of the Fortune 500, who would also have legal exposure for using biased tools.[31] These systems evaluate millions of job seekers each year.
Automated Personality Tests: The American Civil Liberties Union (ACLU) filed a complaint with the Federal Trade Commission (FTC) in 2024 over the consulting company Aon’s algorithmic personality assessments, used by major employers to screen millions of applicants. The ACLU alleged that two of Aon’s assessments have an adverse impact on autistic people and people with mental health disabilities because they test for “characteristics that are close proxies of their disabilities” and those characteristics are not job-related. Another tool, a gamified cognitive test, allegedly produces disparities based on race and disability. The ACLU argues that Aon has engaged in deceptive marketing, a legal violation within the FTC’s jurisdiction, based on the company’s claims that its products are “bias free” and “improve diversity.” The complaint also argues that the company’s “failure to take reasonable measures to assess or address the discriminatory harms” of its automated and/or AI-based assessments is an unfair act, also within the agency’s authority to address.[32]
Housing
Tenant Screening Algorithms: Louis v. SafeRent Solutions is a textbook example of how facially neutral algorithms can perpetuate systemic discrimination. Mary Louis, a Black woman with a Section 8 housing voucher, had her rental application denied by SafeRent Solutions’ algorithmic screening system despite 16 years of perfect rent payment history. The discrimination arose from a design flaw in SafeRent’s algorithm: it failed to properly account for housing vouchers in its scoring system. When voucher holders applied for housing, the algorithm treated them as having less income than they actually had available for rent, since it didn’t recognize that housing authorities would pay approximately 73% of the rent directly to landlords. This facially neutral error had a severely disparate racial impact because Black and Hispanic individuals make up a disproportionate percentage of voucher recipients.[33]
The case revealed how algorithmic discrimination compounds existing inequalities. SafeRent’s heavy reliance on credit scores also penalized Black and Hispanic applicants who have lower average scores due to historical discrimination. Property managers relied unquestioningly on the scores without understanding their flaws. The algorithm provided no meaningful avenue for appeal. SafeRent and its clients (including landlords who used the tool) had less discriminatory alternatives—such as adjusting scoring models to properly incorporate voucher income—but failed to adopt them.
As in the Workday case, a federal judge rejected SafeRent’s defense that it merely provided scores and didn’t make final rental decisions.[34] The Department of Justice (DOJ) supported Louis’s claims.[35] The case settled for almost $2.3 million, and SafeRent agreed that future scoring systems would be validated by third parties approved by the plaintiffs.[36]
Automated Criminal History Checks: In a case involving SafeRent’s predecessor CoreLogic Rental Property Solutions (which CoreLogic later spun off), plaintiffs challenged an algorithmic tenant-screening tool called CrimSAFE. Plaintiffs alleged that CrimSAFE systematically denied housing to individuals—especially African American, Latino, and disabled applicants—based on automated criminal record checks. CrimSAFE’s model combined unrelated offenses like traffic offenses and vandalism into single disqualifying categories. It conducted no individualized assessment, provided no underlying documentation, and issued “decline” decisions directly, effectively making decisions for landlords. The plaintiffs alleged unlawful disparate impact under the Fair Housing Act, based on the model’s compounding of racial disparities in arrest data and CoreLogic’s failure to try to modify its algorithm.[37] The question of whether CoreLogic is subject to the Fair Housing Act is currently pending on appeal.[38]
Chatbots Against Vouchers: In 2023 the nonprofit organization Open Communities and a renter sued Harbor Group, a property rental company with units across the country, and its AI vendor PERQ for using a chatbot that automatically rejected applicants with Housing Choice Vouchers. While the chatbot was specifically configured to reject applicants with government rental assistance, the Fair Housing Act does not prohibit discrimination based on source of income even if intentional. However, the plaintiffs alleged disparate impact based on race—which the Act does cover—because Housing Choice Voucher holders are disproportionately Black. In a settlement, Harbor Group agreed not to turn voucher holders away, and PERQ agreed its AI leasing agents would not violate the Fair Housing Act.[39]
Lending
Student Loan Underwriting: In July 2025, Earnest Operations LLC entered a $2.5 million settlement with the Massachusetts Attorney General over allegations that the company’s AI was more likely to deny loans to Black and Hispanic borrowers, or to offer them worse terms, compared to White borrowers. The state alleged disparate impact in violation of the Equal Credit Opportunity Act and state law. It also faulted the company for “failing to test its models for disparate impact and training its models based on arbitrary, discretionary human decisions.”[40] The company agreed to conduct such testing going forward as part of the settlement.[41]
Another significant matter involved Upstart Network, a financial technology company that uses AI to decide whether to make student loans and at what interest rate. The Student Borrower Protection Center (SBPC) accused the platform of racial discrimination, alleging that the model would cause a hypothetical Howard University graduate to pay almost $3,500 more for a five-year loan than a similar graduate from New York University.[42] After conversations with SBPC and the NAACP Legal Defense Fund, Upstart made some changes to its underwriting model, including dropping consideration of the average SAT and ACT scores at schools, relying instead on average post-graduation income, and adjusting inputs to ensure students at Minority Serving Institutions (MSIs) and non-MSIs are treated equally. The company also appointed the civil rights firm Relman Colfax PLLC as an independent monitor to analyze its lending model.
The monitor found that although Upstart’s model did not use proxies for race, it approved Black applicants for loans at lower rates than White applicants. The monitor also identified less discriminatory alternatives—changes to the AI’s structure that would reduce racial disparities while still serving the company’s purpose of properly assessing creditworthiness.[43]
Upstart implemented the monitor’s recommendations about how it conducts disparate impact testing and what level of disparity warrants a search for less discriminatory alternatives.[44] But it declined to adopt the monitor’s suggested changes to its model. Upstart objected that those changes would cause a drop in the model’s performance—its accuracy in predicting a borrower’s risk of defaulting on the loan or paying it off early—while the monitor assessed that the drop was “so small as to not be meaningful” when applied in the real world. The monitor argued that disparate impact law requires a company to alter its model to reduce disparities, even if there’s technically a small reduction in accuracy, where—as in this case—the altered model is likely to be equally effective at achieving the company’s business needs. The monitor believed a court would interpret federal law this way, as well.[45]
Automated Valuation Models: When people apply for a mortgage to buy a home, refinance, or borrow money against the value of their home to pay for college or startup costs for a new business, the prospective lender has the property appraised. Researchers have found evidence that homes in majority-Black and majority-Latino neighborhoods are valued lower than comparable homes in majority-White neighborhoods; indeed, in home sales, they are more likely to be appraised below the contract price, which represents what the buyer is willing to pay.[46] This evidence is consistent with reported instances of Black families receiving a significantly higher valuation after hiding their family photos and having a White friend appear on their behalf for a second appraisal.[47] A low valuation can prevent a family from purchasing a home by raising the downpayment required, cause a lender to deny refinancing, or depress a family’s ability to borrow against their home equity to pay for college or start a small business. In these ways, discriminatory appraisals suppress wealth-building and widen racial wealth gaps.
Automated valuation models (AVMs) are algorithms that use statistics and appraisals from comparable properties to estimate the value of a given home based on key data (e.g., square footage, number of bathrooms, yard size, location), without the involvement of an appraiser who visits the property. Their use can result in fairer, more accurate valuations by removing the possibility of conscious or unconscious human bias. However, AVMs can also bake in bias because they are trained on valuations made by human appraisers.[48] To combat this problem, six federal agencies issued a rule setting quality control standards for AVMs. Lenders that use these tools must take steps to ensure accuracy in valuation estimates and compliance with nondiscrimination laws such as the Equal Credit Opportunity Act and the Fair Housing Act, both of which prohibit disparate impact discrimination.[49]
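For readers unfamiliar with how such models work, the stripped-down sketch below estimates a home’s value by fitting a linear (“hedonic”) pricing model to a handful of invented comparable sales. Real AVMs use far richer data and methods, and, as noted above, they inherit whatever bias sits in the appraisals and sales they learn from.

```python
# Illustrative only: the "comps" below are invented numbers.
import numpy as np

# Comparable sales: [square_feet, bathrooms, lot_size_acres] -> sale price
comps = np.array([
    [1400, 2, 0.20],
    [1750, 2, 0.25],
    [2100, 3, 0.30],
    [1600, 2, 0.15],
    [2400, 3, 0.40],
], dtype=float)
prices = np.array([255_000, 301_000, 372_000, 278_000, 415_000], dtype=float)

# Fit a simple linear pricing model by least squares, with an intercept column.
design = np.hstack([comps, np.ones((len(comps), 1))])
coefs, *_ = np.linalg.lstsq(design, prices, rcond=None)

# Estimate the subject property: 1,850 sq ft, 2 baths, 0.22-acre lot.
subject = np.array([1850, 2, 0.22, 1.0])
print(f"Estimated value: ${subject @ coefs:,.0f}")
```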
[1] FinRegLab, The Use of Machine Learning for Credit Underwriting, 9-10, 12 (2021), https://finreglab.org/wp-content/uploads/2023/12/FinRegLab_2021-09-16_Research-Report_The-Use-of-Machine-Learning-for-Credit-Underwriting_Market-and-Data-Science-Context.pdf.
[2] Ariana Mihan et al., Artificial Intelligence Bias in the Prediction and Detection of Cardiovascular Disease, npj Cardiovasc Health, 1-2 (2024), https://doi.org/10.1038/s44325-024-00031-9.
[3] Cole Stryker, IBM, What Is Training Data? (May 2, 2025), https://www.ibm.com/think/topics/training-data.
[4] Rina Diane Caballar, IBM, Generative AI vs. Predictive AI: What’s the Difference? (Aug. 9, 2024), https://www.ibm.com/think/topics/generative-ai-vs-predictive-ai-whats-the-difference.
[5] Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Res. 81:1-15 (2018), https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf; see also Patrick Grother et al., National Institute of Standards and Technology (NIST), Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (2019), https://doi.org/10.6028/NIST.IR.8280 (analyzing 189 facial recognition algorithms and finding elevated false-positive rates for East Asian and Black faces).
[6] Maarten Sap et al., The Risk of Racial Bias in Hate Speech Detection, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1668-70 (2019), https://doi.org/10.18653/v1/p19-1163.
[7] See Aylin Caliskan et al., Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics, Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 156-170 (2022), https://doi.org/10.1145/3514094.3534162; Tessa E.S. Charlesworth et al., Gender Stereotypes in Natural Language: Word Embeddings Show Robust Consistency Across Child and Adult Language Corpora of More Than 65 Million Words, Psych. Science, 32(2), 218-240 (2021), https://doi.org/10.1177/0956797620963619; Tolga Bolukbasi et al., Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings (2016), https://doi.org/10.48550/arXiv.1607.06520.
[8] Aylin Caliskan et al., Semantics Derived Automatically from Language Corpora Contain Human-like Biases, SCIENCE 356.6334, 183-186 (2017), http://opus.bath.ac.uk/55288; Aylin Caliskan, Detecting and Mitigating Bias in Natural Language Processing, Brookings (May 10, 2021), https://www.brookings.edu/articles/detecting-and-mitigating-bias-in-natural-language-processing.
[9] Jieyu Zhao et al., Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2979-2989, 2980 (2017).
[10] Jeffrey Dastin, Insight – Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 11, 2018), https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG.
[11] Ziad Obermeyer et al., Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, Science 366(6464): 447-453 (2019), https://doi.org/10.1126/science.aax2342.
[12] Reva Schwartz et al., NIST Special Publication 1270, Toward a Standard for Identifying and Managing Bias in Artificial Intelligence, 10, 33 (2022), https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf; Klas Leino et al., Feature-Wise Bias Amplification, ICLR (2019), https://arxiv.org/abs/1812.08999.
[13] Alexandra George, Thwarting Bias in AI Systems, Carnegie Mellon University Engineering News (Dec. 2018), https://engineering.cmu.edu/news-events/news/2018/12/11-datta-proxies.html.
[14] See, e.g., Nathan Kallus et al., Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination, Management Science 68(3):1959-1981 (2021), https://doi.org/10.1287/mnsc.2020.3850.
[15] See Christine Lindquist, Racial Equity Considerations When Using Recidivism as a Core Outcome in Reentry Program Evaluations, RTI International & Center for Court Innovation, at 1 (2021), https://nationalreentryresourcecenter.org/sites/default/files/inline-files/racialEquityRecidivismBrief.pdf; Sandra G. Mayson, Bias In, Bias Out, 128 Yale L. J. 2218, 2221 n.4, 2251-52 (2019), https://www.yalelawjournal.org/pdf/Mayson_p5g2tz2m.pdf.
[16] Danielle Ensign et al., Runaway Feedback Loops in Predictive Policing, Proceedings of the 1st FAccT Conference, PMLR 81:160-171 (2018), https://proceedings.mlr.press/v81/ensign18a.html.
[17] Rob Reich et al., System Error: Where Big Tech Went Wrong and How We Can Reboot, 102 (2021); Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World, 7 (2018); Meredith Broussard, More than a Glitch, 2 (2023).
[18] See Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harv. J.L. & Tech. 889, 906-21 (2018).
[19] See The Leadership Conference on Civil and Human Rights, The Innovation Framework: A Civil Rights Approach to AI (2025), https://innovationframework.org. See also Pauline T. Kim, Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action, 110 Cal. L. Rev. 1539, 1544, 1574-86 (2022) (discussing various de-biasing techniques and noting their lawfulness under anti-discrimination law, explaining that “many efforts to eliminate problematic features that cause bias in algorithms are more accurately characterized as non-discriminatory efforts to remove unfairness, rather than ‘reverse discrimination’”).
[20] See, e.g., Yunyi Li et al., Mitigating Label Bias via Decoupled Confident Learning, AI & HCI Workshop at 40th ICML (2023), https://doi.org/10.48550/arXiv.2307.08945; Jieyu Zhao et al., Learning Gender-Neutral Word Embeddings, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4847–4853 (2018); Michael Feldman et al., Certifying and Removing Disparate Impact, ACM SIGKDD Conf. on Knowledge Discovery & Data Mining (2015), https://doi.org/10.48550/arXiv.1412.3756.
[21] Nicholas Schmidt & Bryce Stephens, An Introduction to Artificial Intelligence and Solutions to the Problems of Algorithmic Discrimination, 73 Quarterly Report 130, 142 (2019), https://arxiv.org/pdf/1911.05755.
[22] In AI development, a “red team” is a “structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.” NIST, Computer Security Research Center, Glossary: Artificial Intelligence Red-Teaming, https://csrc.nist.gov/glossary/term/artificial_intelligence_red_teaming (last visited Jan. 6, 2026).
[23] See id.; Jenny Yang et al., An Adversarial Training Framework for Mitigating Algorithmic Biases in Clinical Machine Learning, npj Digit. Med. 6, 55 (2023), https://doi.org/10.1038/s41746-023-00805-y.
[24] Emily Black et al., Less Discriminatory Algorithms, 113 Geo. L.J. 53, 56 (2024).
[25] See generally id.; see also Upturn et al., Letter to Department of Justice regarding Comprehensive Use of Civil Rights Authorities to Prevent and Combat Algorithmic Discrimination, 4 (Feb. 1, 2024), https://www.upturn.org/static/files/2024-02-01%20Letter%20to%20DOJ%20re%20AI%20Executive%20Order%20Civil%20Rights.pdf.
[26] Stephen Hayes, Why “Disparate Impact” Is Good for Business, The Rooftop (June 17, 2025), https://www.newamerica.org/future-land-housing/blog/disparate-impact-good-for-business.
[27] See Tara K. Ramchandani, Why “Disparate Impact” Matters for Tackling Intentional Housing Discrimination, The Rooftop (June 17, 2025), https://www.newamerica.org/future-land-housing/blog/disparate-impact-intentional-housing-discrimination (“Disparate impact allows litigants to expose covert intentional discrimination that would otherwise go undetected.”).
[28] Order Granting in Part and Denying in Part Motion to Dismiss (Doc. 80), Mobley v. Workday, Inc., 3:23cv770 (N.D. Cal. July 12, 2024), available at https://storage.courtlistener.com/recap/gov.uscourts.cand.408645/gov.uscourts.cand.408645.80.0.pdf.
[29] Order Granting Preliminary Collective Certification (Doc. 128), Mobley v. Workday, Inc., 3:23cv770 (N.D. Cal. May 16, 2025), available at https://storage.courtlistener.com/recap/gov.uscourts.cand.408645/gov.uscourts.cand.408645.128.0.pdf. A collective action is a species of class action under 29 U.S.C. § 216(b).
[30] Order Re HiredScore Dispute (Doc. 158), Mobley v. Workday, Inc., 3:23cv770, at 1 (N.D. Cal. July 29, 2025), available at https://storage.courtlistener.com/recap/gov.uscourts.cand.408645/gov.uscourts.cand.408645.158.0.pdf. See also Caroline Colvin, Judge orders Workday to supply an exhaustive list of employers that enabled AI hiring tech, HR Dive (July 31, 2025), https://www.hrdive.com/news/workday-must-supply-list-of-employers-who-enabled-hiredscore-ai/756506.
[31] Kelsey Purcell, 2024 Applicant Tracking System (ATS) Usage Report: Key Shifts and Strategies for Job Seekers, Jobscan (July 14, 2025), https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems.
[32] ACLU Complaint to the FTC Regarding Aon Consulting, Inc. (May 30, 2024), https://www.aclu.org/documents/aclu-complaint-to-the-ftc-regarding-aon-consulting-inc.
[33] Complaint (Doc. No. 1), Louis v. SafeRent Solutions, No. 1:22cv10800, at 21 (D. Mass. May 25, 2022), https://clearinghouse.net/doc/160025.
[34] Louis v. SafeRent Solutions, LLC, 685 F. Supp. 3d 19 (D. Mass. July 26, 2023).
[35] Statement of Interest of the United States, Louis v. SafeRent Solutions, LLC, No. 1:22cv10800 (D. Mass. Jan. 9, 2023), https://www.justice.gov/crt/case-document/file/1562776/dl?inline.
[36] Press Release, Cohen Milstein, “Rental Applicants Using Housing Vouchers Settle Ground-Breaking Discrimination Class Action Against SafeRent Solutions” (Apr. 26, 2024), https://www.cohenmilstein.com/rental-applicants-using-housing-vouchers-settle-ground-breaking-discrimination-class-action-against-saferent-solutions.
[37] Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions, LLC, No. 3:18cv705 (D. Conn. Apr. 4, 2018), https://www.cohenmilstein.com/wp-content/uploads/2023/07/CoreLogic-Complaint-04242018_0.pdf.
[38] Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions, LLC, No. 23-1118 (2d Cir.).
[39] Jeff Hirsch, Fair Housing Group Wins Voucher Discrimination Settlement, Evanston Now (Feb. 5, 2024), https://evanstonnow.com/fair-housing-group-wins-voucher-discrimination-settlement.
[40] Massachusetts Office of the Attorney General, Press Release, “AG Campbell Announces $2.5 Million Settlement With Student Loan Lender For Unlawful Practices Through AI Use, Other Consumer Protection Violations,” (July 10, 2025), https://www.mass.gov/news/ag-campbell-announces-25-million-settlement-with-student-loan-lender-for-unlawful-practices-through-ai-use-other-consumer-protection-violations.
[41] Assurance of Discontinuance, In the matter of Earnest Operations LLC, No. 2584-cv1895 (Mass. Super. Ct. July 8, 2025), https://www.mass.gov/doc/earnest-aod/download.
[42] Student Borrower Protection Center, Educational Redlining, 4 (2020), https://protectborrowers.org/wp-content/uploads/2020/02/Education-Redlining-Report.pdf.
[43] Relman Colfax PLLC, Fourth and Final Report of the Independent Monitor, Fair Lending Monitorship of Upstart Network’s Lending Model, 3, 8-12 (Mar. 27, 2024), https://www.relmanlaw.com/news-upstart-final-report.
[44] Id. at 12-13. Upstart defined MSIs as “schools where 80 percent or more of the student body are members of the same racial demographic group.” Id. at 12.
[45] Id. at 15.
[46] Interagency Task Force on Property Appraisal and Valuation Equity (PAVE), Action Plan to Advance Property Appraisal and Valuation Equity, 2-3 (March 2022), https://archives.hud.gov/pave.hud.gov/PAVEActionPlan.pdf; Junia Howell & Elizabeth Korver-Glenn, The Persistent Evaluation of White Neighborhoods as More Valuable Than Communities of Color (Nov. 2, 2022), https://static1.squarespace.com/static/62e84d924d2d8e5dff96ae2f/t/6364707034ee737d19dc76da/1667526772835/Howell+and+Korver-Glenn+Appraised_11_03_22.pdf; Andre Perry et al., The Devaluation of Black Assets: The Case of Residential Property, Brookings (Nov. 27, 2018), https://www.brookings.edu/articles/devaluation-of-assets-in-black-neighborhoods (finding that “owner-occupied homes in Black neighborhoods are undervalued by $48,000 per home on average”).
[47] See, e.g., Debra Kamin, Home Appraised With a Black Owner: $472,000. With a White Owner: $750,000, N.Y. Times (Aug. 18, 2022), https://www.nytimes.com/2022/08/18/realestate/housing-discrimination-maryland.html; Debra Kamin, Black Homeowners Face Discrimination in Appraisals, N.Y. Times (Aug. 25, 2020), https://www.nytimes.com/2020/08/25/realestate/blacks-minorities-appraisals-discrimination.html.
[48] Michael Neal et al., Urban Institute, How Automated Valuation Models Can Disproportionately Affect Majority-Black Neighborhoods (2020), https://www.urban.org/sites/default/files/publication/103429/how-automated-valuation-models-can-disproportionately-affect-majority-black-neighborhoods_1.pdf.
[49] Final Rule, Quality Control Standards for Automated Valuation Models, 89 Fed. Reg. 64538 (published Aug. 7, 2024, effective Oct. 1, 2025), https://www.federalregister.gov/documents/2024/08/07/2024-16197/quality-control-standards-for-automated-valuation-models.