- Who Will Pay for the AI Boom?, The Economist (July 31, 2025), https://www.economist.com/business/2025/07/31/who-will-pay-for-the-trillion-dollar-ai-boom.
- See Reva Schwartz et al., Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, NIST Special Publication 1270, at ii (2022), https://doi.org/10.6028/NIST.SP.1270 (“Bias is neither new nor unique to AI and it is not possible to achieve zero risk of bias in an AI system.”).
- Matthew Kosinski, IBM, What Is Black Box AI? (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai (defining black box AI as an AI in which “[u]sers can see the system’s inputs and outputs, but they can’t see what happens within the AI tool to produce those outputs”).
- ReNika Moore, in conversation with the author (Sept. 16, 2025).
- See Chiraag Bains, The Legal Doctrine that Will Be Key to Preventing AI Discrimination, Brookings (Sept. 13, 2024), https://www.brookings.edu/articles/the-legal-doctrine-that-will-be-key-to-preventing-ai-discrimination.
- President Donald J. Trump, Executive Order 14281, Restoring Equality of Opportunity and Meritocracy, 90 Fed. Reg. 17537 (Apr. 23, 2025), https://www.federalregister.gov/documents/2025/04/28/2025-07378/restoring-equality-of-opportunity-and-meritocracy.
- Id.; Final Rule, Rescinding Portions of Department of Justice Title VI Regulations to Conform More Closely With the Statutory Text and to Implement Executive Order 14281, 90 Fed. Reg. (Dec. 10, 2025), https://www.govinfo.gov/content/pkg/FR-2025-12-10/pdf/2025-22448.pdf.
- President Donald J. Trump, Executive Order 14365, Ensuring a National Policy Framework for Artificial Intelligence, 90 Fed. Reg. 58499 (Dec. 11, 2025), https://www.federalregister.gov/public-inspection/2025-23092/artificial-intelligence-efforts-to-ensure-national-policy-framework-eo-14365. See Charlie Bullock, Legal Obstacles to Implementation of the AI Executive Order (Dec. 2025), https://law-ai.org/legal-obstacles-to-implementation-of-the-ai-executive-order.
- See, e.g., 42 U.S.C. § 2000d (Title VI of the Civil Rights Act of 1964); 42 U.S.C. § 2000e-2(a) (Title VII of the Civil Rights Act of 1964); 29 U.S.C. § 623(a) (Age Discrimination in Employment Act of 1967); 42 U.S.C. § 3604 (Fair Housing Act of 1968); 20 U.S.C. § 1681 (Title IX of the Education Amendments of 1972); 29 U.S.C. § 794 (Section 504 of the Rehabilitation Act of 1973).
- The text of several of these statutes strongly suggested they should be read to prohibit unjustified discriminatory effects. For example, Title VII forbade employers to “limit, segregate, or classify” employees “in any way which would deprive or tend to deprive any individual of employment opportunities or otherwise adversely affect his status as an employee” based on race, color, religion, sex, or national origin. 42 U.S.C. § 2000e-2(a)(2). The Age Discrimination in Employment Act contained the same textual prohibition based on age. 29 U.S.C. § 623(a)(2). The Voting Rights Act originally outlawed the application of procedures to “deny or abridge” the right to vote on account of race or color. Pub. L. No. 89-110, § 2 (1965) (amended in 1982 to prohibit the application of procedures “in a manner which results in a denial or abridgement,” 52 U.S.C. § 10301). The Fair Housing Act made it unlawful to “make” housing “unavailable” based on race, color, religion, and national origin, and later sex, disability, and familial status. 42 U.S.C. § 3604(a), (f).
- See Olatunde C. Johnson, The Agency Roots of Disparate Impact, 49 HARV. C.R.-C.L. L. REV. 125, 127, 133-34, 138-39 (2014) (arguing that agency action in the immediate wake of the Civil Rights Act’s passage “allows us to understand disparate impact not as a separate offshoot of antidiscrimination law invented by courts, but as a reasonable agency implementation choice given the potentially broad and conflicting meanings of the antidiscrimination directive of civil rights law”).
- 29 Fed. Reg. 16298, 16299 (Dec. 4, 1964), codified at 45 C.F.R. § 80.3(b)(2).
- 42 U.S.C. § 2000e-2(h).
- Alfred W. Blumrosen, Strangers in Paradise: Griggs v. Duke Power Co. and the Concept of Employment Discrimination, 71 MICH. L. REV. 59, 60-61, 64 (1972); Johnson, supra note 11, at 134, 140-41.
- 401 U.S. 424 (1971).
- Id. at 431.
- Id. at 426, 429-30 & n.6.
- 347 U.S. 483 (1954).
- Griggs, 401 U.S. at 427.
- Id.
- Id. at 432 (emphasis in original); see also id. (“good intent or absence of discriminatory intent does not redeem employment procedures or testing mechanisms that operate as ‘built-in headwinds’ for minority groups and are unrelated to measuring job capability”).
- Id. at 431. The Court also reviewed Title VII’s legislative history and validated the EEOC’s view that Title VII exempts only employment tests that are job-related. Id. at 433-34.
- Id. at 431-33.
- Albemarle Paper Co. v. Moody, 422 U.S. 405, 425 (1975).
- Wards Cove Packing Co. v. Atonio, 490 U.S. 642, 650-51 (1989) (“a comparison . . . between the racial composition of the qualified persons in the labor market and the persons holding at-issue jobs . . . generally forms the proper basis for the initial inquiry in a disparate impact case”), superseded on other grounds by Civil Rights Act of 1991, Pub. L. No. 102-166, 105 Stat. 1071 (1991); Hazelwood Sch. Dist. v. United States, 433 U.S. 299, 308 (1977) (“a proper comparison was between the racial composition of Hazelwood’s teaching staff and the racial composition of the qualified public school teacher population in the relevant labor market”).
- Jones v. City of Boston, 752 F.3d 38, 43-44, 46-47 & n.9 (1st Cir. 2014).
- Albemarle Paper Co., 422 U.S. at 425 (cleaned up).
- Id.
- See Civil Rights Act of 1991, Pub. L. No. 102-166, 105 Stat. 1071, § 3(3) (1991) (listing among its purposes “to confirm statutory authority and provide statutory guidance for the adjudication of disparate impact suits under title VII”).
- See, e.g., Chiraag Bains, What Just Happened: The Trump Administration’s Dismissal of Voting Rights Lawsuits, Just Security (May 27, 2025) (explaining that results claims under Section 2 of the Voting Rights Act “differ from disparate impact claims in important ways,” including that “VRA plaintiffs must adduce evidence in certain enumerated categories concerning past and present discrimination”), https://www.justsecurity.org/113745/wjh-trump-dismissal-voting-rights-lawsuits.
- See Bains, The Legal Doctrine that Will Be Key to Preventing AI Discrimination, supra note 5.
- FinRegLab, The Use of Machine Learning for Credit Underwriting, 9-10, 12 (2021), https://finreglab.org/wp-content/uploads/2023/12/FinRegLab_2021-09-16_Research-Report_The-Use-of-Machine-Learning-for-Credit-Underwriting_Market-and-Data-Science-Context.pdf.
- Ariana Mihan et al., Artificial Intelligence Bias in the Prediction and Detection of Cardiovascular Disease, npj Cardiovascular Health, 1-2 (2024), https://doi.org/10.1038/s44325-024-00031-9.
- Cole Stryker, IBM, What Is Training Data? (May 2, 2025), https://www.ibm.com/think/topics/training-data.
- Rina Diane Caballar, IBM, Generative AI vs. Predictive AI: What’s the Difference? (Aug. 9, 2024), https://www.ibm.com/think/topics/generative-ai-vs-predictive-ai-whats-the-difference.
- Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Res. 81:1-15 (2018), https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf; see also Patrick Grother et al., National Institute of Standards and Technology (NIST), Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (2019), https://doi.org/10.6028/NIST.IR.8280 (analyzing 189 facial recognition algorithms and finding elevated false-positive rates for East Asian and Black faces).
- Maarten Sap et al., The Risk of Racial Bias in Hate Speech Detection, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1668-70 (2019), https://doi.org/10.18653/v1/p19-1163.
- See Aylin Caliskan et al., Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics, Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 156-170 (2022), https://doi.org/10.1145/3514094.3534162; Tessa E.S. Charlesworth et al., Gender Stereotypes in Natural Language: Word Embeddings Show Robust Consistency Across Child and Adult Language Corpora of More Than 65 Million Words, Psychological Science, 32(2), 218-240 (2021), https://doi.org/10.1177/0956797620963619; Tolga Bolukbasi et al., Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings (2016), https://doi.org/10.48550/arXiv.1607.06520.
- Aylin Caliskan et al., Semantics Derived Automatically from Language Corpora Contain Human-like Biases, Science 356(6334): 183-186 (2017), http://opus.bath.ac.uk/55288; Aylin Caliskan, Detecting and Mitigating Bias in Natural Language Processing, Brookings (May 10, 2021), https://www.brookings.edu/articles/detecting-and-mitigating-bias-in-natural-language-processing.
- Jieyu Zhao et al., Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2979-2989, at 2980 (2017).
- Jeffrey Dastin, Insight – Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 11, 2018), https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG.
- Ziad Obermeyer et al., Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, Science 366(6464): 447-453 (2019), https://doi.org/10.1126/science.aax2342.
- Schwartz et al., supra note 2, at 10, 33; Klas Leino et al., Feature-Wise Bias Amplification, ICLR (2019), https://arxiv.org/abs/1812.08999.
- Alexandra George, Thwarting Bias in AI Systems, Carnegie Mellon University Engineering News (Dec. 2018), https://engineering.cmu.edu/news-events/news/2018/12/11-datta-proxies.html.
- See, e.g., Nathan Kallus et al., Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination, Management Science 68(3):1959-1981 (2021), https://doi.org/10.1287/mnsc.2020.3850.
- See Christine Lindquist, Racial Equity Considerations When Using Recidivism as a Core Outcome in Reentry Program Evaluations, RTI International & Center for Court Innovation, at 1 (2021), https://nationalreentryresourcecenter.org/sites/default/files/inline-files/racialEquityRecidivismBrief.pdf; Sandra G. Mayson, Bias In, Bias Out, 128 Yale L. J. 2218, 2221 n.4, 2251-52 (2019), https://www.yalelawjournal.org/pdf/Mayson_p5g2tz2m.pdf.
- Danielle Ensign et al., Runaway Feedback Loops in Predictive Policing, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:160-171 (2018), https://proceedings.mlr.press/v81/ensign18a.html.
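Ensign et al. model this dynamic as a generalized Pólya urn: patrols are dispatched to wherever incidents were previously observed, but incidents are only observed where patrols go, so early noise compounds. A toy Python simulation of that loop (my own illustration; all numbers are hypothetical, with two precincts given identical true incident rates):

```python
import random

def simulate_patrols(steps=10000, seed=0):
    """Toy runaway-feedback loop in the spirit of Ensign et al. (2018).

    Two precincts have identical true incident rates, but each patrol is
    sent to a precinct with probability proportional to the incidents
    *observed* there so far, so early random noise compounds over time.
    """
    rng = random.Random(seed)
    true_rate = [0.5, 0.5]   # identical underlying incident rates
    observed = [1, 1]        # urn starts with one "ball" per precinct
    for _ in range(steps):
        # Dispatch the patrol proportionally to observed incident counts.
        p0 = observed[0] / (observed[0] + observed[1])
        precinct = 0 if rng.random() < p0 else 1
        # An incident can only be observed where the patrol actually went.
        if rng.random() < true_rate[precinct]:
            observed[precinct] += 1
    return observed
```

Despite equal true rates, the observed split converges to a random fraction determined by early noise rather than to the true 50/50 split, which is the runaway behavior the paper analyzes.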
- Rob Reich et al., System Error: Where Big Tech Went Wrong and How We Can Reboot, 102 (2021); Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World, 7 (2018); Meredith Broussard, More than a Glitch, 2 (2023).
- See Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harv. J.L. & Tech. 889, 906-21 (2018).
- See The Leadership Conference on Civil and Human Rights, The Innovation Framework: A Civil Rights Approach to AI (2025), https://innovationframework.org. See also Pauline T. Kim, Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action, 110 Cal. L. Rev. 1539, 1544, 1574-86 (2022) (discussing various de-biasing techniques and noting their lawfulness under anti-discrimination law, explaining that “many efforts to eliminate problematic features that cause bias in algorithms are more accurately characterized as non-discriminatory efforts to remove unfairness, rather than ‘reverse discrimination’”).
- See, e.g., Yunyi Li et al., Mitigating Label Bias via Decoupled Confident Learning, AI & HCI Workshop at 40th ICML (2023), https://doi.org/10.48550/arXiv.2307.08945; Jieyu Zhao et al., Learning Gender-Neutral Word Embeddings, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4847–4853 (2018); Michael Feldman et al., Certifying and Removing Disparate Impact, ACM SIGKDD Conf. on Knowledge Discovery & Data Mining (2015), https://doi.org/10.48550/arXiv.1412.3756.
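To make one of these techniques concrete: Feldman et al.’s “disparate impact removal” maps each individual’s value on a proxy-laden feature to the value at the same within-group quantile of a shared target distribution, preserving rank order within each group while making the feature’s distribution identical across groups. A toy sketch of that idea (my own illustration, assuming equal group sizes; the paper generalizes to partial repair and unequal groups):

```python
import statistics

def repair_feature(values_by_group):
    """Toy rank-preserving repair in the spirit of Feldman et al. (2015).

    Each value is mapped to its within-group quantile position in a shared
    target distribution (here, the per-quantile median across groups), so
    the repaired feature no longer differs in distribution across groups.
    Assumes all groups have the same number of members, for simplicity.
    """
    sorted_groups = [sorted(v) for v in values_by_group.values()]
    n = len(sorted_groups[0])
    # Target distribution: median across groups at each quantile position.
    target = [statistics.median(vals[i] for vals in sorted_groups) for i in range(n)]
    repaired = {}
    for group, vals in values_by_group.items():
        order = sorted(range(n), key=lambda i: vals[i])  # rank within group
        out = [0.0] * n
        for quantile_pos, i in enumerate(order):
            out[i] = target[quantile_pos]
        repaired[group] = out
    return repaired

# Two groups whose raw scores sit on very different scales: after repair,
# within-group ranks are preserved but both groups share one distribution.
repaired = repair_feature({"a": [3, 1, 2], "b": [10, 20, 30]})
```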
- Nicholas Schmidt & Bryce Stephens, An Introduction to Artificial Intelligence and Solutions to the Problems of Algorithmic Discrimination, 73 Quarterly Report 130, 142 (2019), https://arxiv.org/pdf/1911.05755.
- In AI development, a “red team” is a “structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.” NIST, Computer Security Research Center, Glossary: Artificial Intelligence Red-Teaming, https://csrc.nist.gov/glossary/term/artificial_intelligence_red_teaming (last visited Jan. 6, 2026).
- See id.; Jenny Yang et al., An Adversarial Training Framework for Mitigating Algorithmic Biases in Clinical Machine Learning, npj Digit. Med. 6, 55 (2023), https://doi.org/10.1038/s41746-023-00805-y.
- Emily Black et al., Less Discriminatory Algorithms, 113 Geo. L.J. 53, 56 (2024).
- See generally id.; see also Upturn et al., Letter to Department of Justice regarding Comprehensive Use of Civil Rights Authorities to Prevent and Combat Algorithmic Discrimination, 4 (Feb. 1, 2024), https://www.upturn.org/static/files/2024-02-01%20Letter%20to%20DOJ%20re%20AI%20Executive%20Order%20Civil%20Rights.pdf.
- Stephen Hayes, Why “Disparate Impact” Is Good for Business, The Rooftop (June 17, 2025), https://www.newamerica.org/future-land-housing/blog/disparate-impact-good-for-business.
- See Tara K. Ramchandani, Why “Disparate Impact” Matters for Tackling Intentional Housing Discrimination, The Rooftop (June 17, 2025), https://www.newamerica.org/future-land-housing/blog/disparate-impact-intentional-housing-discrimination (“Disparate impact allows litigants to expose covert intentional discrimination that would otherwise go undetected.”).
- Order Granting in Part and Denying in Part Motion to Dismiss (Doc. 80), Mobley v. Workday, Inc., 3:23cv770 (N.D. Cal. July 12, 2024), available at https://storage.courtlistener.com/recap/gov.uscourts.cand.408645/gov.uscourts.cand.408645.80.0.pdf.
- Order Granting Preliminary Collective Certification (Doc. 128), Mobley v. Workday, Inc., 3:23cv770 (N.D. Cal. May 16, 2025), available at https://storage.courtlistener.com/recap/gov.uscourts.cand.408645/gov.uscourts.cand.408645.128.0.pdf. A collective action is a species of class action under 29 U.S.C. § 216(b).
- Order Re HiredScore Dispute (Doc. 158), Mobley v. Workday, Inc., 3:23cv770, at 1 (N.D. Cal. July 29, 2025), available at https://storage.courtlistener.com/recap/gov.uscourts.cand.408645/gov.uscourts.cand.408645.158.0.pdf. See also Caroline Colvin, Judge orders Workday to supply an exhaustive list of employers that enabled AI hiring tech, HR Dive (July 31, 2025), https://www.hrdive.com/news/workday-must-supply-list-of-employers-who-enabled-hiredscore-ai/756506.
- Kelsey Purcell, 2024 Applicant Tracking System (ATS) Usage Report: Key Shifts and Strategies for Job Seekers, Jobscan (July 14, 2025), https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems.
- ACLU Complaint to the FTC Regarding Aon Consulting, Inc. (May 30, 2024), https://www.aclu.org/documents/aclu-complaint-to-the-ftc-regarding-aon-consulting-inc.
- Complaint (Doc. No. 1), Louis v. SafeRent Solutions, No. 1:22cv10800, at 21 (D. Mass. May 25, 2022), https://clearinghouse.net/doc/160025.
- Louis v. SafeRent Solutions, LLC, 685 F. Supp. 3d 19 (D. Mass. July 26, 2023).
- Statement of Interest of the United States, Louis v. SafeRent Solutions, LLC, No. 1:22cv10800 (D. Mass. Jan. 9, 2023), https://www.justice.gov/crt/case-document/file/1562776/dl?inline.
- Press Release, Cohen Milstein, “Rental Applicants Using Housing Vouchers Settle Ground-Breaking Discrimination Class Action Against SafeRent Solutions” (Apr. 26, 2024), https://www.cohenmilstein.com/rental-applicants-using-housing-vouchers-settle-ground-breaking-discrimination-class-action-against-saferent-solutions.
- Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions, LLC, No. 3:18cv705 (D. Conn. Apr. 4, 2018), https://www.cohenmilstein.com/wp-content/uploads/2023/07/CoreLogic-Complaint-04242018_0.pdf.
- Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions, LLC, No. 23-1118.
- Jeff Hirsch, Fair Housing Group Wins Voucher Discrimination Settlement, Evanston Now (Feb. 5, 2024), https://evanstonnow.com/fair-housing-group-wins-voucher-discrimination-settlement.
- Massachusetts Office of the Attorney General, Press Release, “AG Campbell Announces $2.5 Million Settlement With Student Loan Lender For Unlawful Practices Through AI Use, Other Consumer Protection Violations,” (July 10, 2025), https://www.mass.gov/news/ag-campbell-announces-25-million-settlement-with-student-loan-lender-for-unlawful-practices-through-ai-use-other-consumer-protection-violations.
- Assurance of Discontinuance, In the matter of Earnest Operations LLC, No. 2584-cv1895 (Mass. Super. Ct. July 8, 2025), https://www.mass.gov/doc/earnest-aod/download.
- Student Borrower Protection Center, Educational Redlining, 4 (2020), https://protectborrowers.org/wp-content/uploads/2020/02/Education-Redlining-Report.pdf.
- Relman Colfax PLLC, Fourth and Final Report of the Independent Monitor, Fair Lending Monitorship of Upstart Network’s Lending Model, 3, 8-12 (Mar. 27, 2024), https://www.relmanlaw.com/news-upstart-final-report.
- Id. at 12-13. Upstart defined MSIs as “schools where 80 percent or more of the student body are members of the same racial demographic group.” Id. at 12.
- Id. at 15.
- Interagency Task Force on Property Appraisal and Valuation Equity (PAVE), Action Plan to Advance Property Appraisal and Valuation Equity, 2-3 (March 2022), https://archives.hud.gov/pave.hud.gov/PAVEActionPlan.pdf; Junia Howell & Elizabeth Korver-Glenn, The Persistent Evaluation of White Neighborhoods as More Valuable Than Communities of Color (Nov. 2, 2022), https://static1.squarespace.com/static/62e84d924d2d8e5dff96ae2f/t/6364707034ee737d19dc76da/1667526772835/Howell+and+Korver-Glenn+Appraised_11_03_22.pdf; Andre Perry et al., The Devaluation of Black Assets: The Case of Residential Property, Brookings (Nov. 27, 2018), https://www.brookings.edu/articles/devaluation-of-assets-in-black-neighborhoods (finding that “owner-occupied homes in Black neighborhoods are undervalued by $48,000 per home on average”).
- See, e.g., Debra Kamin, Home Appraised With a Black Owner: $472,000. With a White Owner: $750,000, N.Y. Times (Aug. 18, 2022), https://www.nytimes.com/2022/08/18/realestate/housing-discrimination-maryland.html; Debra Kamin, Black Homeowners Face Discrimination in Appraisals, N.Y. Times (Aug. 25, 2020), https://www.nytimes.com/2020/08/25/realestate/blacks-minorities-appraisals-discrimination.html.
- Michael Neal et al., Urban Institute, How Automated Valuation Models Can Disproportionately Affect Majority-Black Neighborhoods (2020), https://www.urban.org/sites/default/files/publication/103429/how-automated-valuation-models-can-disproportionately-affect-majority-black-neighborhoods_1.pdf.
- Final Rule, Quality Control Standards for Automated Valuation Models, 89 Fed. Reg. 64538 (published Aug. 7, 2024, effective Oct. 1, 2025), https://www.federalregister.gov/documents/2024/08/07/2024-16197/quality-control-standards-for-automated-valuation-models.
- President Donald J. Trump, Executive Order 14281, Restoring Equality of Opportunity and Meritocracy, 90 Fed. Reg. 17537 (Apr. 23, 2025), https://www.federalregister.gov/documents/2025/04/28/2025-07378/restoring-equality-of-opportunity-and-meritocracy.
- Id.
- Agencies had already removed key guidance documents from their websites in the early days of the Trump Administration. See, e.g., EEOC, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023), available at https://data.aclum.org/storage/2025/01/EOCC_www_eeoc_gov_laws_guidance_select-issues-assessing-adverse-impact-software-algorithms-and-artificial.pdf; U.S. Department of Housing and Urban Development, Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing (April 29, 2024), available at https://www.fairhousingnc.org/wp-content/uploads/2024/08/FHEO_Guidance_on_Screening_of_Applicants_for_Rental_Housing.pdf.
- See Alexander v. Sandoval, 532 U.S. 275 (2001) (holding that Title VI’s statutory prohibition on discrimination, Section 601, prohibits only intentional discrimination, and that there is no private right of action to enforce disparate impact regulations promulgated under Section 602, meaning only the federal government can enforce them).
- Executive Order 14281.
- Final Rule, Rescinding Portions of Department of Justice Title VI Regulations, supra note 7.
- Equal Credit Opportunity Act (Regulation B), 90 Fed. Reg. 50901 (Nov. 13, 2025), https://www.federalregister.gov/documents/2025/11/13/2025-19864/equal-credit-opportunity-act-regulation-b.
- Id.
- Executive Order 14365, supra note 8.
- Trump issued another order about AI that also warrants comment. Executive Order 14319, Preventing Woke AI in the Federal Government, announced that the federal government—the world’s largest buyer—would only purchase generative AI systems developed in accordance with “ideological neutrality.” 90 Fed. Reg. 35389 (July 23, 2025), https://www.federalregister.gov/documents/2025/07/28/2025-14217/preventing-woke-ai-in-the-federal-government. By way of definition, the order specifies that large language models must not “encode” diversity, equity, and inclusion in their outputs. Technologists have rightly pointed out that this mandate “positions one ideological perspective as the default standard for neutrality,” and “efforts to align models” with it “risk introducing new distortions.” Amy Winecoff & Chinmay Deshpande, Center for Democracy & Technology, Anti-Woke AI Is a Technical Mirage (Aug. 8, 2025), https://cdt.org/insights/anti-woke-ai-is-a-technical-mirage. Indeed, by preventing developers from addressing the known biases discussed in this paper, Trump’s “woke AI” order may actually require discriminatory design as a condition of AI vendors obtaining federal contracts.
- Albemarle, 422 U.S. at 425 (a plaintiff must show that employment tests “select applicants for hire or promotion in a racial pattern significantly different from that of the pool of applicants”); 29 C.F.R. § 1607.16(Q) (defining “adverse impact” as a “substantially different rate of selection in hiring, promotion or other employment decision which works to the disadvantage of members of a race, sex, or ethnic group”).
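The agencies’ Uniform Guidelines operationalize a “substantially different rate of selection” through the four-fifths rule of 29 C.F.R. § 1607.4(D): a group’s selection rate below roughly 80 percent of the highest group’s rate is generally regarded as evidence of adverse impact. A minimal sketch of that comparison (the applicant counts are hypothetical):

```python
def selection_rates(outcomes):
    """Per-group selection rates from (group -> (selected, applicants)) counts."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag groups whose selection rate falls below 80% of the highest
    group's rate, per the four-fifths rule of 29 C.F.R. § 1607.4(D).
    Returns group -> (impact ratio, flagged?)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best, r / best < 0.8) for g, r in rates.items()}

# Hypothetical applicant data: (selected, total applicants) per group.
# group_a rate = 0.60; group_b rate = 0.40; ratio 0.40/0.60 ≈ 0.67 < 0.8.
impact = four_fifths_check({"group_a": (48, 80), "group_b": (24, 60)})
```

Note that the four-fifths rule is an enforcement rule of thumb, not the legal standard itself; as the cases above show, courts also look to statistical significance and labor-market comparisons.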
- See EEO Leaders’ Statement on Disparate Impact, President Trump’s Executive Order on Disparate Impact Analysis Is Legally Incorrect and Will Undermine Meritocracy and Equal Employment Opportunity (May 2025), https://bit.ly/3F7A6bh (“[T]he entire concept of disparate impact is that unjustified and significant differences in outcome resulting from a ‘neutral’ policy means that people of different races or sexes are not being given an equal opportunity to succeed.”).
- 42 U.S.C. § 2000e-2(j).
- 557 U.S. 557 (2009).
- Id.
- Id. at 594 (Scalia, J., concurring).
- Zachary Best & Stephen Hayes, Executive Order on Disparate Impact: An Explainer, 3 (May 9, 2025) (“No court has ever held that disparate impact runs afoul of the Constitution.”), https://www.relmanlaw.com/media/cases/1965_Executive%20Order%20on%20Disparate%20Impact%20Explainer.pdf.
- Fisher v. Univ. of Texas at Austin (Fisher I), 570 U.S. 297 (2013); Fisher v. Univ. of Texas at Austin (Fisher II), 579 U.S. 365 (2016). See also id. at 532 (Thomas, J., dissenting) (describing the state law establishing the Top Ten Percent plan as “facially race-neutral law” that “served to equalize competition between students who live in relatively affluent areas with superior schools and students in poorer areas” and “tended to benefit African-American and Hispanic students, who are often trapped in inferior public schools”).
- Reva Siegel, Race-Conscious but Race-Neutral: The Constitutionality of Disparate Impact in the Roberts Court, 66 Ala. L. Rev. 653, 672-78 (2015). Justice Kennedy, the author of both Fisher opinions, had previously explained that policymakers could pursue race-conscious goals through race-neutral means. See Parents Involved in Cmty. Schs. v. Seattle Sch. Dist. No. 1, 551 U.S. 701, 789 (2007) (Kennedy, J., concurring in part and concurring in the judgment) (“School boards may pursue the goal of bringing together students of diverse backgrounds and races through other means, including strategic site selection of new schools; drawing attendance zones with general recognition of the demographics of neighborhoods; allocating resources for special programs; recruiting students and faculty in a targeted fashion; and tracking enrollments, performance, and other statistics by race. These mechanisms are race conscious but do not lead to different treatment based on a classification that tells each student he or she is to be defined by race, so it is unlikely any of them would demand strict scrutiny to be found permissible.”). Indeed, Justice Scalia himself had acknowledged as much 20 years before Ricci. See City of Richmond v. J.A. Croson, 488 U.S. 469, 526 (1989) (Scalia, J., concurring in the judgment) (“A State can, of course, act to undo the effects of past discrimination in many permissible ways that do not involve classification by race.”) (internal quotation marks omitted).
- Tex. Dep’t of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, Inc., 576 U.S. 519 (2015).
- See Samuel R. Bagenstos, Disparate Impact and the Role of Classification and Motivation in Equal Protection Law after Inclusive Communities, 101 Cornell L. Rev. 1115, 1127-28 (2016) (“Because the Fair Housing Act does not expressly provide for disparate-impact liability, if a majority of the Court had serious constitutional concerns about disparate impact claims per se, the Court would likely have avoided the constitutional problem by reading the statute not to provide for such claims. By holding that the Fair Housing Act does provide for disparate-impact liability, the Court must therefore have rejected the argument that disparate impact law is unconstitutional.”).
- Inclusive Cmtys., 576 U.S. at 536-37.
- See, e.g., California Fair Employment and Housing Act, Cal. Gov’t Code § 12955.8(b); Colorado Anti-Discrimination Act, C.R.S. §§ 24-34-402, 24-34-502; Illinois Human Rights Act, 775 I.L.C.S. 5/2-102; Mass. Gen. Laws ch. 151B, § 4; N.J. Admin. Code §§ 13:13-2.5, 13:13-3.4(f)(2), 13:13-4.11; Washington Law Against Discrimination, R.C.W. ch. 49.60.
- See, e.g., Colorado SB 24-205, Consumer Protections for Artificial Intelligence, codified at C.R.S. § 6-1-1701 et seq. (2024), https://leg.colorado.gov/bills/sb24-205; Illinois H.B. 3773, amending the Illinois Human Rights Act (2024), https://legiscan.com/IL/bill/HB3773/2023; New Jersey Attorney General, Division on Civil Rights, Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination (2025), https://www.nj.gov/oag/newsreleases25/2025-0108_DCR-Guidance-on-Algorithmic-Discrimination.pdf.
- See Charlie Bullock, supra note 8 (stating that the case for preemption under the Commerce Clause is “legally dubious and unlikely to succeed in court”); Gibson Dunn, President Trump’s Latest Executive Order on AI Seeks to Preempt State Laws (Dec. 15, 2025), https://www.gibsondunn.com/president-trump-latest-executive-order-on-ai-seeks-to-preempt-state-laws (explaining why DOJ’s preemption arguments “are unlikely to be successful” and why the contemplated FCC and FTC actions would not be a basis for preemption).