The Leadership Conference Comments to OSTP on National AI Strategy

Re: Request for Information: National Priorities for Artificial Intelligence, Docket Number: OSTP-TECH-2023-0007

Dear Director Prabhakar:

On behalf of The Leadership Conference on Civil and Human Rights (The Leadership Conference) and its Media/Telecommunications Task Force, we write in response to the Office of Science and Technology Policy’s (OSTP) Request for Information (RFI) on National Priorities for Artificial Intelligence.[i] The Leadership Conference, a coalition charged by its diverse membership of more than 230 national organizations to promote and protect the rights of all persons in the United States, and its Media/Telecommunications Task Force, work to ensure that civil and human rights, equal opportunity, and democratic participation are at the center of communication and technology policy debates.

We appreciate the Biden-Harris administration’s continued commitment to equity and civil rights related to the development and use of artificial intelligence (AI) and OSTP’s leadership in that effort. We welcome the opportunity to contribute to a National AI Strategy for mitigating the risks and maximizing the benefits of AI.

The National AI Strategy must be centered on equity and civil rights

With the widespread use of AI, individuals are grappling with the impacts of discriminatory automated systems in nearly every facet of life: lost economic opportunities, higher costs or outright denial of loans and credit, harm to their current employment or ability to get a job, lower quality health care, and barriers to housing.

The Biden-Harris administration has already taken key steps to begin the process of mitigating the risks of AI, including Executive Order 14091 (“Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government”),[ii] the AI Bill of Rights,[iii] NIST’s AI Risk Management Framework,[iv] the 2023 National AI Research and Development Strategic Plan,[v] and the plan for the National AI Research Resource.[vi] Executive Order 14091, for example, instructs federal agencies to use all of their available authorities to combat algorithmic discrimination.[vii] The National AI R&D Strategic Plan calls for investments in technical research to develop frameworks for accountability, fairness, privacy, and bias as well as research to understand and mitigate the social and ethical risks of AI.[viii] These efforts provide a strong basis for a National AI Strategy that is centered on equity and civil rights.

We recognize that there are other areas of concern that merit attention, including national security, the ability to compete globally, and marketplace competition. None of the desired outcomes in those areas are achievable if AI systems are not trustworthy — if they are not safe, effective, and equitable. Likewise, we note the longer-term concern that some have raised about the existential threats of AI. But, as global leaders have pointed out, discrimination is a concrete, imminent harm that is already affecting people today, while “existential” threats remain speculative. There are concrete policy actions that we can take right now to address these harms and thereby help prevent tomorrow’s crisis.[ix]

This view is supported by a group of leading researchers and experts who recently issued a statement underscoring how mitigating current harms will lay a strong foundation for regulating future risks.[x] These experts said:

“From the dangers of inaccurate or biased algorithms that deny life-saving healthcare to language models exacerbating manipulation and misinformation, our research has long anticipated harmful impacts of AI systems of all levels of complexity and capability. This body of work also shows how to design, audit, or resist AI systems to protect democracy, social justice, and human rights. This moment calls for sound policy based on the years of research that has focused on this topic. We already have tools to help build a safer technological future, and we call on policymakers to fully deploy them.”

Information Requested in the RFI

Protecting rights, safety, and national security:

The Leadership Conference, joined by more than 60 civil rights and civil society organizations, including members of The Leadership Conference, has identified several concrete measures that should not only inform, but should also be directly incorporated into, a National AI Strategy.[xi] These measures will help ensure that administration policy and actions on the development and use of AI are consistent with the administration’s goals of ensuring equity and civil rights:

  1. Make the AI Bill of Rights binding administration policy through OMB guidance. The AI Bill of Rights is a landmark document that lays out clear values, provides the basis for effective policy, and documents the harms that can occur without rules and enforcement. Now OMB, DPC, and OSTP must work together to ensure that the forthcoming OMB guidance on the use of AI by the U.S. government requires the steps identified in the AI Bill of Rights for all automated systems the federal government develops, deploys, uses, acquires, or otherwise funds. The guidance should be responsive to the requirements of the AI in Government Act of 2020 and the Advancing American AI Act and should update past OMB guidance on the regulation of AI. In doing so, the guidance should demonstrate the administration’s commitment to enacting Executive Order 14091, which states that artificial intelligence and other automated systems should be designed, developed, acquired, and used in a manner that advances equity. All agencies should be required to adhere to the guidance, and it should be developed with input from the public.
  2. Ensure coordinated follow-through by federal agencies in light of Executive Order 14091 and the AI Bill of Rights, including actions outlined in the October 2022 fact sheet and needed additional efforts. Key actions outlined in the fact sheet should be completed without delay, including implementation of new guidance from the Department of Housing and Urban Development on tenant screening algorithms. The EEOC, OSHA, Justice Department, and Labor Department can issue and enforce further guidance on hiring tech, algorithmic worker management, and workplace surveillance. The Justice Department can further the EO’s mandate to “[protect] the public from algorithmic discrimination” by ensuring that the funding, procurement, and use of law enforcement technologies and other criminal-legal technologies advance equitable public safety and criminal justice practices, particularly for communities who often experience adverse disparate racial impacts. The recent joint statement from the Justice Department’s Civil Rights Division, CFPB, FTC, and EEOC provides another model for educating the public about algorithmic discrimination and the applicability of existing federal laws. Other agencies should also be urged to act on the AI Bill of Rights by engaging the public and issuing guidance about the impacts of AI in their relevant sectors. The White House can and should support them in doing so, for example, by fully staffing the National AI Initiative Office and ensuring the effective functioning of the Interagency Policy Committee on AI and Equity.
  3. Launch sustained public engagement with diverse stakeholders to raise awareness of AI harms and support meaningful public and private sector efforts to combat algorithmic discrimination. As communities, businesses, and policymakers across all levels of government grapple with the effects of AI on people’s daily lives, the administration can play an important role as a convener, source of expertise, and driver of public education. As it did in the technical companion for the AI Bill of Rights, the administration can amplify examples of AI-driven harms and underscore the responsibility of developers and deployers of automated systems to address them. The administration can convene diverse experts to strengthen public dialogue about effective approaches to identifying and mitigating algorithmic harms, including by modeling participatory processes that center impacted communities. As one example, the administration must ensure that efforts to develop sector-specific “profiles” for the NIST AI Risk Management Framework have strong oversight, including White House participation, to ensure quality control and robust participation from civil society, civil rights organizations, and impacted communities. The administration must also ensure that AI-related discussions taking place in the U.S.-EU Trade & Technology Council, in agreements such as the Indo-Pacific Economic Framework, in the Council of Europe Committee on AI, and in other internationally focused efforts include and reflect the input of U.S. civil society stakeholders, while advancing the ideals of Executive Order 14091. Through these steps and others, the White House and agencies across the administration can advance public dialogue about potential AI risks and appropriate remediation.

As we stated in comments to the National Telecommunications and Information Administration’s Request for Comments on AI Accountability,[xii] “now is the time to act.” A National AI Strategy centered on civil rights can help to “shift focus from principles to durable, measurable, enforceable, and robust safeguards for the use of AI.”

Advancing equity and strengthening civil rights:

In its RFI, OSTP asks about the “unique considerations for understanding the impacts of AI systems on underserved communities” and “additional considerations or measures [that] are needed to assure that AI mitigates algorithmic discrimination, advances equal opportunity, and promotes positive outcomes for all,” particularly when AI is used in sectors such as health care, employment, and transportation.[xiii] As OSTP develops the National AI Strategy, it should consider the disproportionate impact that algorithmic discrimination has on communities of color and other historically marginalized groups, and it should prioritize the principles outlined in the AI Bill of Rights to mitigate these harms. The potential benefits of AI cannot be realized without centering a National AI Strategy on civil rights and equity, including a whole-of-society plan for implementing the Blueprint for an AI Bill of Rights.

People who face discrimination on the basis of race, ethnicity, national origin, religion, sex, sexual orientation, gender identity, income, immigration status, or disability are more likely to be harmed by automated systems and often lack the resources to respond to harms when they occur. These harms span numerous sectors, including housing,[xiv] employment,[xv] financial services and credit,[xvi] insurance,[xvii] public health and health care,[xviii] education,[xix] public accommodations,[xx] government benefits and services,[xxi] and policing.[xxii]

The National AI Strategy must also ensure that AI systems are usable by all members of our community. People with disabilities continue to face accessibility challenges in using AI systems, despite the executive order’s call for accessibility. Developers need to intentionally include disabled people by building systems that conform to accessibility standards. Developers should also account for differences in language to ensure accessibility for the communities where AI systems are used.

Researchers have documented how algorithmic systems produce discriminatory outcomes that impair equal opportunity and erode civil rights protections. In the context of housing, for instance, algorithmic systems limit the ability of Black people and other people of color to access mortgage loans and public housing. A review of more than 2 million mortgage applications found that Black applicants were 80 percent more likely to be rejected by mortgage approval algorithms when compared with similar White applicants.[xxiii] And, in 2023, reporters discovered that the algorithmic scoring system used by the Los Angeles Homeless Services Authority discriminated against Black and Latino people, giving White applicants higher priority in the agency’s housing system.[xxiv]

Rather than leveling the playing field, algorithmic systems disproportionately disadvantage communities of color, perpetuating systemic bias and discrimination in the online economy. These disparities can occur because algorithmic systems, such as those used to evaluate prospective mortgage applicants, are trained using vast troves of data that reflect existing societal biases or inequities.[xxv] These data are built on generations of redlining and segregation that restricted opportunities for Black Americans to access wealth, education, employment, and healthy environments, baking these inequities into present-day technologies.[xxvi] These datasets are compiled using predatory commercial surveillance practices that undermine privacy rights and fuel data-driven discrimination.

AI systems claiming to read people’s emotions carry significant risks of discrimination in contexts such as hiring interviews and schools. In China, emotion recognition technology (ERT) is being used to monitor students’ emotions and evaluate their responses to classwork. These ERT systems capture specific biometric information, such as facial muscle points, through the camera on students’ computers or tablets and then attempt to identify emotions like happiness, sadness, anger, surprise, and fear.

ERT implementation in education can exacerbate oppressive dynamics, particularly for Black and Brown students. Notably, it is widely acknowledged that Black students face a disproportionate number of suspensions and disciplinary actions compared to their White counterparts, even for similar behaviors. AI systems that claim to recognize emotions may also discriminate on the basis of disability because they fail to accurately evaluate how some individuals’ disabilities may affect facial features or emotional affect or expression. The National Disabled Law Students Association issued a report documenting students’ experiences and concerns with similar algorithmic technology in test proctoring, including the bar exam.[xxvii]

Additionally, a study exploring racialized perceptions of emotions and bias among prospective teachers concluded that teachers are more inclined to interpret the facial expressions of Black boys and girls as angry, regardless of their actual emotions.[xxviii] Consequently, if racially biased emotion recognition technology is deployed in such problematic situations, it could amplify existing inequalities and oppression.

AI systems that claim to draw inferences about emotional states constitute an unacceptable intrusion into people’s private lives, eroding our right to privacy and freedom of thought. The right to freedom of thought encompasses the right to safeguard an individual’s innermost thoughts and opinions from unwanted external scrutiny or judgment, the right not to have our thoughts and opinions manipulated, and the assurance that we will not be penalized for what we think or believe.

An alarming risk to civil and human rights arises when ERT systems are deployed to identify potentially dangerous or aggressive protestors, resulting in the preemptive arrest of someone before they engage in any aggressive action. In this scenario, the reliability of the inference becomes irrelevant: the real-life, tangible consequences of an arrest are undeniable and threaten our fundamental rights to freedom of expression and assembly.

Despite these harms, many AI technologies lack appropriate safeguards, such as data minimization requirements, pre- and post-deployment impact assessments, and protections against algorithmic discrimination — all features in the administration’s own Blueprint for an AI Bill of Rights — to ensure that systems are safe, effective, and free of bias. And, due to the “black box” nature of many algorithmic systems, impacted communities frequently lack transparency about when and how AI is used, denying individuals recourse to contest errors or rights violations when they occur.

A National AI Strategy should build on the work already done to address the use of AI by outlining a whole-of-society plan for implementing the Blueprint for an AI Bill of Rights. This plan should address how federal regulators and the private sector can apply the AI Bill of Rights to new and existing AI technologies. To do so, the National AI Strategy should outline concrete steps for deploying impact assessments, disparity testing and mitigation, data minimization and privacy protections, transparency and explainability requirements, and human alternatives for both near-term and long-term innovations.

The National AI Strategy should also consider measures for ensuring that the federal government is well equipped to respond to current and future instances of algorithmic discrimination. For instance, the National AI Strategy could address how federal agencies can prepare to enforce consumer protection and civil rights laws in the context of AI, such as by evaluating the need for increased technical talent or resources at enforcement agencies. This includes creating or expanding an Office of Civil Rights within each agency. These offices must be resourced, including with technical experts, to help ensure that issues related to civil rights and the use of AI are adequately and appropriately addressed.

Promoting economic growth and good jobs:

A National AI Strategy must include a focus on mitigating job losses and harnessing technology for the growth of good jobs. Such a strategy should ensure that the administration’s roadmap to support good jobs adapts to the potential impacts of AI.[xxix] AI can harm workers by displacing them from their jobs, lowering job quality, increasing unemployment, and worsening inequalities.[xxx] It is up to policymakers to shape whether AI benefits or harms workers.

A central plan to address AI’s potential for economic growth and the creation of good jobs is critical. The National AI Strategy should focus on mitigating adverse economic impacts and job losses, including by enhancing labor protections; banning certain AI uses or practices that are potentially discriminatory or violate worker privacy;[xxxi] increasing transparency about data and business practices; ensuring collective bargaining addresses AI issues; and using tax policy, including both AI taxes and tax credits, to reward the development or adoption of labor-augmenting AI. The National AI Strategy should also ensure that displaced workers have an adequate social safety net, which will require modernizing Unemployment Insurance (UI),[xxxii] exploring job guarantees if AI-driven job losses become widespread,[xxxiii] and applying the lessons of models like the DOL’s Trade Adjustment Assistance (TAA) program to AI-related job losses. Efforts to reskill and retrain existing workers should ensure that new jobs are quality jobs with collective bargaining rights, and that they increase workforce participation by minority and historically disadvantaged communities in areas where new jobs or skills arise from the development and use of AI tools and systems.

As new innovations are brought to scale, the harms associated with AI will occur with greater frequency and severity, underscoring the need for a National AI Strategy that is centered on equity and civil rights.[xxxiv] Without addressing the harmful and discriminatory effects of algorithmic systems, we risk replicating the inequities of past centuries of redlining and segregation in the modern-day online economy.

Investing in public services:

There is much discussion about using AI systems to deliver public services, including benefits. The same protections discussed earlier in these comments apply to such uses. Algorithmic systems used in benefits determinations can cause a number of harms, including making determinations based on corrupt and discriminatory data, arriving at determinations with no explanation, and operating behind a lack of transparency. In addressing the use of AI systems related to public benefits, the National AI Strategy should include (1) increased involvement of impacted communities in developing and assessing algorithmic systems, (2) protections for due process, transparency, and equal protection through regulation and legislation, and (3) minimum standards for the use of AI in benefits determinations, applied beginning before development and continuing after the system is put into place.

Conclusion

As discussed in these comments, a National AI Strategy must include clear measures to ensure that AI is equitable and that civil rights are protected. Thank you for your consideration of our recommendations. We look forward to continuing to work with the administration and OSTP on this issue and others of importance to our country. If you have any questions about this letter, please contact Media/Telecommunications Task Force Co-Chair David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law, at [email protected], or Jonathan Walter, media/tech policy counsel at The Leadership Conference, at [email protected].

Sincerely,

The Leadership Conference on Civil and Human Rights
Access Now
American Association of People with Disabilities
American Civil Liberties Union
Anti-Defamation League
Center for American Progress
Communications Workers of America (CWA)
Lawyers’ Committee for Civil Rights Under Law
National Consumer Law Center
National Fair Housing Alliance

 

[i] Request for Information: National Priorities for Artificial Intelligence, Office of Science and Technology Policy, 88 Fed. Reg. 34,194 (May 26, 2023) [hereinafter RFI].

[ii] White House, Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government (2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/02/16/executive-order-on-further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/ [hereinafter E.O. 14091].

[iii] See White House Office of Science & Technology Policy, Blueprint for an AI Bill of Rights (2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf [hereinafter Blueprint].

[iv] See National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[v] See National Science and Technology Council, National Artificial Intelligence Research and Development Strategic Plan 2023 Update (2023), https://www.whitehouse.gov/wp-content/uploads/2023/05/National-Artificial-Intelligence-Research-and-Development-Strategic-Plan-2023-Update.pdf [hereinafter National AI R&D].

[vi] See National Artificial Intelligence Research Resource Task Force, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource (2023), https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf.

[vii] E.O. 14091 at 10,831.

[viii] National AI R&D at 12-13.

[ix] “Stop talking about tomorrow’s AI doomsday when AI poses risks today,” Nature, June 27, 2023 (nature.com).

[x] ACM FAccT, Statement on AI Harms and Policy (facctconference.org); “Former White House advisors and tech researchers co-sign new statement against AI harms,” VentureBeat, June 15, 2023 (venturebeat.com).

[xi] “Next Steps to Advance Equity and Civil Rights in Artificial Intelligence and Technology Policy,” Letter to the Domestic Policy Council, OSTP, and OMB, June 13, 2023, signed by more than 60 civil rights and civil society organizations, DPC-OSP-OMB-AI-Letter.pdf (civilrightsdocs.info).

[xii] The Leadership Conference on Civil and Human Rights, Comments in Response to NTIA’s Request for Comments on AI Accountability, Docket No. NTIA-2023-0005, June 13, 2023 (civilrights.org).

[xiii] RFI at 34,195.

[xiv] Valerie Schneider, Locked Out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice, 52.1 Colum. Hum. Rts. L. Rev. 251, 254 (2020), https://blogs.law.columbia.edu/hrlr/files/2020/11/251_Schneider.pdf; Lauren Kirchner & Matthew Goldstein, Access Denied: Faulty Automated Background Checks Freeze Out Renters, The Markup & N.Y. Times (May 28, 2020), https://themarkup.org/locked-out/2020/05/28/access-denied-faulty-automated-background-checks-freeze-out-renters; Brief of the American Civil Liberties Union Foundation, The Lawyers’ Committee for Civil Rights Under Law, The National Fair Housing Alliance, and The Washington Lawyers’ Committee for Civil Rights and Urban Affairs, as Amici Curiae Supporting Appellant and Reversal, Opiotennione v. Bozzuto Mgmt. Co., No. 21-1919, (4th Cir. 2021), ECF No. 49-2, https://www.lawyerscommittee.org/wp-content/uploads/2022/08/3.-Opiotennione-v.-Bozzuto-Mgmt-Corp-amicus-brief.pdf.

[xv] Miranda Bogen & Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, Upturn (Dec. 2018), https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20–%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf; Alina Köchling & Marius Claus Wehner, Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development, 13 Bus. Res. 795 (2020), https://doi.org/10.1007/s40685-020-00134-w.

[xvi] Robert Bartlett et al., Consumer-Lending Discrimination in the FinTech Era (Nat’l Bureau Econ. Res. Working Paper No. 25943, 2019), https://www.nber.org/papers/w25943; Bertrand K. Hassani, Societal Bias reinforcement through machine learning: a credit scoring perspective, 1 AI & Ethics 239 (2020), https://link.springer.com/article/10.1007/s43681-020-00026-z.

[xvii] Julia Angwin et al., Minority Neighborhoods Pay Higher Car Insurance Premiums Than White Areas With the Same Risk, ProPublica (Apr. 5, 2017), https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-white-areas-same-risk; Maddy Varner & Aaron Sankin, Suckers List: How Allstate’s Secret Auto Insurance Algorithm Squeezes Big Spenders, The Markup (Feb. 25, 2020), https://themarkup.org/allstates-algorithm/2020/02/25/car-insurance-suckers-list.

[xviii] Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Sci. 447 (2019), https://www.science.org/doi/10.1126/science.aax2342; Trishan Panch et al., Artificial intelligence and algorithmic bias: implications for health systems, 9 J. Glob. Health (2019) (offering definitions of algorithmic bias in health systems), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6875681/; Natalia Norori et al., Addressing bias in big data and AI for health care: A call for open science, 2 Patterns (2021), https://doi.org/10.1016/j.patter.2021.100347.

[xix] Todd Feathers, Major Universities Are Using Race as a “High Impact Predictor” of Student Success, The Markup (Mar. 2, 2021), https://themarkup.org/machine-learning/2021/03/02/major-universities-are-using-race-as-a-high-impact-predictor-of-student-success; Maureen Guarcello et al., Discrimination in a Sea of Data: Exploring the Ethical Implications of Student Success Analytics, Educause Rev. (Aug. 24, 2021), https://er.educause.edu/articles/2021/8/discrimination-in-a-sea-of-data-exploring-the-ethical-implications-of-student-success-analytics.

[xx] Alex P. Miller & Kartik Hosanagar, How Targeted Ads and Dynamic Pricing Can Perpetuate Bias, Harv. Bus. Rev. (Nov. 8, 2019), https://hbr.org/2019/11/how-targeted-ads-and-dynamic-pricing-can-perpetuate-bias; Jennifer Valentino-DeVries et al., Websites Vary Prices, Deals Based on Users’ Information, Wall St. J. (Dec. 24, 2012), https://www.wsj.com/articles/SB10001424127887323777204578189391813881534.

[xxi] Rashida Richardson et al., Litigating Algorithms 2019 Report: New Challenges to Government Use of Algorithmic Decision Systems, AI Now Inst., (Sept. 2019), https://ainowinstitute.org/litigatingalgorithms-2019-us.pdf; Hadi Elzayn et al., Measuring and Mitigating Racial Disparities in Tax Audits 3–4 (Stan. Inst. for Econ. Pol’y Rsch., Working Paper, Jan. 30, 2023), https://dho.stanford.edu/wp-content/uploads/IRS_Disparities.pdf.

[xxii] Aaron Sankin et al., Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them, The Markup (Dec. 2, 2021), https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them; Todd Feathers, Gunshot-Detecting Tech is Summoning Armed Police to Black Neighborhoods, Vice: Motherboard (July 19, 2021), https://www.vice.com/en/article/88nd3z/gunshot-detecting-tech-is-summoning-armed-police-to-black-neighborhoods.

[xxiii] Emmanuel Martinez & Lauren Kirchner, The Secret Bias Hidden in Mortgage-Approval Algorithms, The Markup & Associated Press (Aug. 25, 2021), https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.

[xxiv] Colin Lecher & Maddy Varner, Black and Latino Homeless People Rank Lower on L.A.’s Housing Priority List, L.A. Times & The Markup (Feb. 28, 2023), https://www.latimes.com/california/story/2023-02-28/black-latino-homeless-people-housing-priority-list-los-angeles.

[xxv] Jane Chung, Racism In, Racism Out: A Primer on Algorithmic Racism, Public Citizen (2022), https://www.citizen.org/article/algorithmic-racism/; Blueprint.

[xxvi] Yeshimabeit Milner & Amy Traub, Data Capitalism and Algorithmic Racism, Data for Black Lives and Demos (2021), https://www.demos.org/sites/default/files/2021-05/Demos_%20D4BL_Data_Capitalism_Algorithmic_Racism.pdf.

[xxvii] “Report on Concerns Regarding Online Administration of Bar Exams,” The National Disabled Law Students Association, July 29, 2020, NDLSA_Online-Exam-Concerns-Report1.pdf.

[xxviii] “Prospective Teachers Misperceive Black Children as Angry,” American Psychological Association, July 2, 2023 (apa.org).

[xxix] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/16/biden-harris-administration-roadmap-to-support-good-jobs/

[xxx] Daron Acemoglu and Simon Johnson, “What’s Wrong with ChatGPT?” Project Syndicate, February 6, 2023, available at

https://www.project-syndicate.org/commentary/chatgpt-ai-big-tech-corporate-america-investing-in-eliminating-workers-by-daron-acemoglu-and-simon-johnson-2023-02; Daron Acemoglu and Simon Johnson, “Big Tech Is Bad. Big A.I. Will Be Worse,” The New York Times, June 9, 2023, available at https://www.nytimes.com/2023/06/09/opinion/ai-big-tech-microsoft-google-duopoly.html.

[xxxi] https://www.whitehouse.gov/ostp/news-updates/2023/05/01/hearing-from-the-american-people-how-are-automated-tools-being-used-to-surveil-monitor-and-manage-workers/

[xxxii] https://www.americanprogress.org/article/temporary-expansions-made-unemployment-insurance-a-lifesaver-workers-need-long-term-reform-to-keep-it-that-way/

[xxxiii] https://www.americanprogress.org/article/blueprint-21st-century/

[xxxiv] EPIC, Generating Harms: Generative AI’s Impact & Paths Forward (2023), https://epic.org/wp-content/uploads/2023/05/EPIC-Generative-AI-White-Paper-May2023.pdf.