The Leadership Conference’s Comments to DOJ on AI in Criminal Justice
May 28, 2024
Nancy La Vigne
Director
National Institute of Justice
810 7th Street, N.W.
Washington, D.C. 20531
Submitted via email
Subject: Public Input to Section 7.1(b) of E.O. 14110
Dear Director La Vigne:
On behalf of The Leadership Conference on Civil and Human Rights (The Leadership Conference),[i] its Center for Civil Rights and Technology, and the undersigned organizations, we write in response to the Request for Input from the Public on Section 7.1(b) of Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued by the National Institute of Justice (NIJ), Office of Justice Programs, U.S. Department of Justice (DOJ).[ii] The Leadership Conference, as a coalition and through the Center for Civil Rights and Technology, is actively engaged in policy advocacy to ensure that civil rights remain at the center of the development and use of new technologies, especially where those technologies are rights- and safety-impacting.
NIJ’s Request for Input (RFI) seeks information to inform a report on the use of artificial intelligence (AI) in the criminal justice system. It is prompted by Executive Order 14110 (AI EO), which addresses trustworthiness in AI. The AI EO directs DOJ to:
- enforce existing Federal laws to address civil rights and civil liberties violations related to AI;
- ensure fair and impartial justice for all with respect to the use of AI in the criminal justice system, including establishing best practices, safeguards, and use limits for AI; and,
- advance the agency’s AI expertise, including assessing existing capacity to investigate impacts on civil rights arising from the use of AI.
People who face discrimination based on race, ethnicity, national origin, religion, sex, sexual orientation, gender identity, income, immigration status, or disability are more likely to be harmed by the use of AI systems. These communities often lack the resources to respond to harms when they occur. Harms caused by AI are well documented and span numerous sectors, including housing,[iii] employment,[iv] financial services and credit,[v] insurance,[vi] public health and health care,[vii] education,[viii] public accommodations,[ix] and government benefits and services.[x] Individuals are also harmed by the use of AI in the criminal justice system and policing.[xi]
The work of our federal agencies, like DOJ, affects the lives of people every day. If technology tools, such as AI systems, do not work, real people can suffer real harms. The stakes are high. In the criminal justice system, that can mean loss of liberty and denial of constitutional rights.
General Considerations
We agree with the Biden administration’s focus on ensuring technology tools, including AI systems, are consistent with our democratic values centered on civil rights and liberties and the protection of privacy. AI used in the criminal justice system should behave safely and equitably. AI systems should not be biased or discriminate against individuals or traditionally marginalized groups and communities. AI systems should benefit, not harm, the public. We support implementing requirements to ensure that these core AI principles are followed, as well as those embodied in the AI Bill of Rights,[xii] the recent Memorandum from the Office of Management and Budget (OMB), Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (OMB AI Memo), and the AI EO itself. NIJ’s report should reflect these fundamental values.
This means that DOJ must be provided with the resources and capacity, including technical expertise, to investigate and enforce existing laws. DOJ should also coordinate and collaborate with other agencies, such as the Departments of Health and Human Services, Housing and Urban Development, and Labor, as well as the financial services agencies, such as the Consumer Financial Protection Bureau and the Office of the Comptroller of the Currency. Coordination among agencies through the announced Interagency Working Group is a promising step, if it results in progress toward protecting our civil rights where AI is used.
Those capabilities are also vital for DOJ as it establishes needed guardrails for how the agency uses AI. AI can exacerbate existing biases. For example, biased data used to train systems results in biased decisions and leads to harmful outcomes. Even where there is a “human in the loop,” a tendency to accept decisions made or assisted by AI without further scrutiny can also lead to harmful results. Moreover, AI systems may not be “fit for purpose.” Using systems that are not assessed, tested, or monitored, and whose capabilities and limitations are not known, could have serious consequences resulting in rights-impacting outcomes.
Furthermore, DOJ needs to be transparent and engage with all stakeholders, including those from the civil rights community and civil society, at every stage of the implementation of the AI EO and OMB guidance. Such engagement should occur as accountability structures are developed, as internal processes for assessment and testing are created, and as enforcement proceeds. Ongoing transparency and engagement with the civil rights community and civil society will also help achieve the AI EO’s goal of “advancing equity and civil rights.”
In a speech earlier this year at the University of Oxford, Deputy Attorney General Lisa Monaco noted the “promise and peril of AI.”[xiii] The Deputy AG went on to set expectations of how the use of AI by the agency should proceed:
So, hypothetically, if the Department wanted to use new AI systems to — say — assist in identifying a criminal suspect or support a sentencing decision — we must first rigorously stress test that AI application and assess its fairness, accuracy, and safety. And I want to be clear. These guardrails will be critical for the Department to do its job and deliver on its mission. The rule of law, the safety of our country, and the rights of Americans depend on it.
We agree. In these comments, we discuss the measures that DOJ should put in place to meet those goals.
Rights-Impacting Uses of AI in Policing and Criminal Justice
AI is already being used in the criminal justice system in rights-impacting ways, including:
- Video Analysis and Surveillance
- DNA Analysis
- Gunshot Detection
- Predictive Policing and Crime Forecasting
- Risk Assessment for Pretrial and Parole Decisions
- Sentencing Guidelines
- Reducing Bias and Discrimination
- Cybersecurity and Data Protection
- Case Management and Workflow Optimization
AI has also been used to train immigration officers in speaking with refugees. Because the system could affect determinations about refugees, even if it is not used to make a particular decision, it is still important to understand the system’s capabilities and limitations.[xiv] To make sure systems are being used as intended, the following questions must be answered: How is the system being assessed? What tests are being conducted to ensure that it is not rights-impacting? How are the communities where it will be deployed being engaged? How was the tool trained? How will AI bias be mitigated? What are the oversight mechanisms?
Law Enforcement’s Use of Facial Recognition Technology
Facial recognition technology (FRT) provides an instructive case study. The use of FRT in policing, and the false arrests that have followed from erroneous identifications, demonstrates the need to avoid biased technologies. Similar AI systems can produce, and are producing, similar harms. That is why DOJ must be able to demonstrate that an AI system is safe and trustworthy and that it will produce intended, rights-protecting outcomes before it is put into use.
The Leadership Conference has spoken out against the use of facial recognition since 2016 and has a long history of weighing in on law enforcement issues. Our statement before Congress at a hearing on “Facial Recognition Technology: Examining its Use by Law Enforcement”[xv] further highlighted the inherent bias of FRT and its disparate impact on marginalized communities, and we incorporate that statement here by reference.
A Path Forward to Center AI Policy on Civil Rights
The use of AI in policing and criminal justice must be viewed holistically and in the broader context of the ongoing and systemic issues communities face. AI policies must be grounded in a fundamental, civil rights-focused paradigm for public safety. There are several roadmaps that provide a framework for moving forward.
Executive Order Directives. DOJ must be held accountable for the AI systems it is using or planning to use. Thus, as a first step, in addition to implementing the AI EO, DOJ should move forward with meeting the requirements of the February 2023 Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government[xvi], which, among other things, requires agencies to look at their own use of artificial intelligence and to use those systems in ways that advance equity.
Civil Rights Principles for the Era of Big Data. In 2014, a coalition of civil rights and media justice groups released the “Civil Rights Principles for the Era of Big Data,”[xvii] calling on the U.S. government and businesses to respect and promote equal opportunity and equal justice in the development and use of data-driven technologies. These principles were updated in 2020.[xviii] They propose a set of civil rights protections that are relevant and can serve as a guide, including:
- Ending High-Tech Profiling
- Ensuring Justice in Automated Decisions
- Preserving Constitutional Principles
- Ensuring that Technology Serves People Historically Subject to Discrimination
- Defining Responsible Use of Personal Information and Enhancing Individual Rights
- Making Systems Transparent and Accountable
Vision for Justice. The Leadership Conference’s 2024 Vision for Justice platform,[xix] which we incorporate by reference, provides a framework for understanding the harm technology, including AI, poses when used in the criminal-legal system. It calls for the end of high-tech harm and the surveillance state as “the use of (these) data-driven technologies reproduce, exacerbate, and entrench the existing disparities within the criminal-legal system.” In addition to greater transparency and accountability, the platform calls on the federal government to place a moratorium on or outright ban a number of systems, software, and platforms that further entrench civil rights and civil liberties inequities in the criminal-legal system, especially for marginalized communities.
Specific Measures Needed to Ensure DOJ Safeguards Civil Rights Where AI is Used
Given the threat to civil rights and other harms caused by AI-driven technologies such as facial recognition, the arguments for banning or pausing the use of this technology are apparent. Despite these ongoing concerns, however, the use of such technology is already widespread and growing. A major question to answer is how the agency will be held accountable for the AI systems it is using or planning to use. If the decision is made to use an AI system, DOJ must implement risk management requirements, including pre-deployment impact assessments, real-world testing, independent evaluations, and ongoing monitoring; a minimal illustration of what a pre-deployment disparity assessment might compute appears immediately below.
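To make the idea of a pre-deployment impact assessment concrete, the sketch below shows one disparity check an agency might run on labeled evaluation data before fielding a tool: comparing false positive rates across demographic groups. It is illustrative only; the sample records, group labels, and the 2.0 ratio threshold are hypothetical placeholders, not values drawn from any DOJ guidance.

```python
# Illustrative sketch only: a minimal pre-deployment disparate-impact check.
from collections import defaultdict

def false_positive_rates(records):
    """Compute each group's false positive rate for a tool's flags.

    Each record is (group, tool_flagged, actually_positive).
    """
    fp = defaultdict(int)   # flagged by the tool, but ground truth was negative
    tn = defaultdict(int)   # correctly left unflagged
    for group, flagged, positive in records:
        if not positive:
            if flagged:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g])
            for g in set(fp) | set(tn) if fp[g] + tn[g] > 0}

def disparity_report(rates, max_ratio=2.0):
    """Flag groups whose false positive rate exceeds the lowest group's
    rate by more than an assumed, illustrative ratio."""
    baseline = min(rates.values())
    if baseline == 0:
        return {}
    return {g: r for g, r in rates.items() if r / baseline > max_ratio}

# Hypothetical evaluation records: (group, tool_flagged, actually_positive).
sample = [
    ("Group A", True, False), ("Group A", False, False),
    ("Group A", False, False), ("Group A", False, False),
    ("Group B", True, False), ("Group B", True, False),
    ("Group B", True, False), ("Group B", False, False),
]
rates = false_positive_rates(sample)
print(rates)                    # e.g. Group A: 0.25, Group B: 0.75
print(disparity_report(rates))  # groups needing further scrutiny before deployment
```

A real assessment would of course examine many more metrics and involve the affected communities, as discussed throughout these comments; the point is that such checks can and should be run, and documented, before a system is deployed.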
Civil rights and civil society organizations, including The Leadership Conference, have called for several immediate steps that DOJ can take to safeguard civil rights:
- Develop and issue an Interagency Policy Statement on algorithmic discrimination.
  - This would be in addition to the joint statement[xx] that existing legal authorities, including civil rights laws, apply to the use of automated decisions.
  - The statement would provide a framework for how agencies will determine whether unlawful algorithmic discrimination exists.
- Establish an interagency working group to develop and expand the federal government’s own anti-discrimination testing capabilities to uncover algorithmic discrimination.
  - The working group would develop and expand anti-discrimination testing capabilities, provide assistance on enforcement cases, and support other efforts to combat algorithmic discrimination.
- Require entities to perform anti-discrimination testing of their systems and to search for less discriminatory algorithms (a minimal sketch of what such a test might compute appears after this list).
  - Agencies should take steps to clarify expectations that the private sector proactively combat algorithmic discrimination.
- Require entities to collect demographic information for anti-discrimination purposes.
- Establish regular meetings with external stakeholders to provide updates on AI EO implementation and the agency’s ongoing efforts to combat discrimination caused by the use of AI, and to gather feedback, including concerns, from community groups.
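As a rough illustration of the anti-discrimination testing and the search for less discriminatory alternatives described above, the sketch below computes a selection-rate ratio between groups and keeps only candidate models that clear an assumed threshold, preferring the most accurate survivor. The 0.8 (“four-fifths”) threshold is borrowed from employment-testing practice purely as a placeholder and is not a standard DOJ or any other agency has adopted for AI systems; the model names, outcomes, and accuracy figures are hypothetical.

```python
# Illustrative sketch only: selection-rate testing and a search for a
# less discriminatory, adequately accurate alternative.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs; returns the share selected per group."""
    totals, selected = {}, {}
    for group, sel in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if sel else 0)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest (closer to 1.0 is more even)."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return min(rates.values()) / highest if highest > 0 else None

def least_discriminatory_adequate_model(candidates, min_ratio=0.8):
    """From (name, outcomes, accuracy) candidates, keep those meeting the assumed
    ratio threshold and return the most accurate survivor, if any."""
    passing = [(name, acc) for name, outcomes, acc in candidates
               if (adverse_impact_ratio(outcomes) or 0) >= min_ratio]
    return max(passing, key=lambda t: t[1]) if passing else None

# Hypothetical flag decisions from two candidate models: (group, selected).
model_a = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 3 + [("B", False)] * 7
model_b = [("A", True)] * 7 + [("A", False)] * 3 + [("B", True)] * 6 + [("B", False)] * 4

print(adverse_impact_ratio(model_a))  # 0.375: fails the assumed 0.8 threshold
print(adverse_impact_ratio(model_b))  # ~0.857: passes
print(least_discriminatory_adequate_model(
    [("model_a", model_a, 0.91), ("model_b", model_b, 0.89)]))  # ('model_b', 0.89)
```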
Beyond this, we recommend that NIJ’s report include an AI policy framework that ensures:
- AI Principles and Governance:
  - DOJ should create, implement, and operationalize the AI principles in the AI Bill of Rights,[xxi] the AI EO, and the OMB AI Memo. These principles guide the development and deployment of AI systems, ensuring they align with fairness, transparency, and accountability.
  - Appropriate governance structures should be in place to provide ongoing review and oversight of AI systems.
  - Agencies must consider disparate impacts caused by the use of an AI system.
- Transparency and Explainability:
  - AI algorithms used in the criminal justice system should be transparent. Developers and users need to understand how these algorithms work.
  - Explainability is crucial. Users should be able to interpret the decisions made by AI systems, especially when those decisions affect people’s lives.
- Data Quality and Bias Mitigation:
  - Biased data can lead to biased AI outcomes. Efforts should be made to eliminate bias from the data used to train AI algorithms.
- Regular Audits and Monitoring:
  - Regular audits of AI systems can help identify bias and correct it promptly.
  - Monitoring AI systems in real-world scenarios ensures that any unintended biases are detected and addressed (a minimal sketch of one such monitoring check appears after this list).
- Community Engagement, Collaboration, and Research:
  - Collaboration among researchers, practitioners, impacted communities, and policymakers is essential. Sharing knowledge and best practices can lead to better AI solutions.
  - Continued research on bias mitigation techniques and fairness in AI is crucial for improving criminal justice applications.
- Training for staff procuring or using AI.
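To illustrate the kind of ongoing, real-world monitoring described in the framework above, the sketch below recomputes group-level error rates over a rolling window of recent decisions and flags any group whose error rate drifts too far above the lowest group’s. It is a simplified, hypothetical example; the window size, the 0.10 tolerance, and the definition of “error” would all need to be specified for the particular system being monitored.

```python
# Illustrative sketch only: rolling-window monitoring of group-level error rates.
from collections import deque

class DisparityMonitor:
    """Track recent decisions and alert when group-level error rates diverge."""

    def __init__(self, window=500, tolerance=0.10):
        self.recent = deque(maxlen=window)  # only the most recent decisions are kept
        self.tolerance = tolerance          # assumed, illustrative gap before alerting

    def record(self, group, was_error):
        """Log one decision outcome and return any groups currently over the gap."""
        self.recent.append((group, was_error))
        return self.check()

    def check(self):
        totals, errors = {}, {}
        for group, err in self.recent:
            totals[group] = totals.get(group, 0) + 1
            errors[group] = errors.get(group, 0) + (1 if err else 0)
        rates = {g: errors[g] / totals[g] for g in totals}
        if len(rates) < 2:
            return []
        best = min(rates.values())
        return [g for g, r in rates.items() if r - best > self.tolerance]

# Hypothetical stream of decisions in which errors concentrate on "Group B".
monitor = DisparityMonitor(window=200, tolerance=0.10)
stream = ([("Group A", False)] * 90 + [("Group A", True)] * 10 +
          [("Group B", False)] * 70 + [("Group B", True)] * 30)
alerts = []
for group, was_error in stream:
    alerts = monitor.record(group, was_error)
if alerts:
    print("Disparity alert: review outcomes for", alerts)  # -> ['Group B']
```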
Close Gaps in Existing Laws to Account for AI
We appreciate the AI EO’s recognition that existing civil rights laws should be enforced when AI is used. DOJ, working with the other agencies, must enforce these laws to mitigate harms caused by the use of AI. However, many laws, including those addressing civil rights, did not contemplate the existence of AI, much less its ubiquitous adoption. That is why any gaps in existing civil rights laws must be closed. As the Lawyers’ Committee for Civil Rights Under Law noted in comments filed in the FTC’s Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security[xxii]:
- Existing anti-discrimination laws have many gaps and limitations. Some, such as Title II of the Civil Rights Act of 1964, exclude retail stores or raise unresolved questions about how they apply to online businesses.
- Some federal civil rights laws are not comprehensive in the classes they protect. Sections 1981 and 1982 of the Civil Rights Act of 1866, and Title VI of the Civil Rights Act of 1964, only apply to race and national origin.
- Title II additionally applies to religion. But these core statutes do not apply to sex. The scope of classes protected by Section 1985, which prohibits conspiracies against civil rights and has been used to combat commercial discrimination, is unsettled.
Conclusion
We appreciate this opportunity to comment on the use of AI in the criminal justice system and to provide input on a practical and actionable approach to identify, measure, and mitigate harms before AI is put into use, as well as to evaluate existing systems.
We continue to believe that, for the use of AI to be successful, agencies must ensure that the benefits and risks of AI are considered early on and throughout the AI lifecycle, through design, development, and deployment. Before procuring or using an AI system, an agency should understand its limitations, recognize its intended uses as well as potential misuses, consider how to ensure the AI works for the people, and take steps to prevent harm. In the criminal justice area specifically, as noted in the Vision for Justice platform, there are times when AI simply should not be used.
Without clear guidance on how to ensure accountability, transparency, and explainability, agencies will fail to provide oversight of algorithmic decision-making and will heighten the risk of harm. Moreover, given that federal government procurement rules and purchasing practices often have a strong influence on markets, well-crafted procurement guidance can help set a baseline for responsible AI use. Likewise, DOJ’s approach to AI will set expectations for how state and local governments, including law enforcement, use AI. It is critical for those entities to ensure that their use of AI systems aligns with the policy goals to uplift civil rights in the use of technology.
Thank you for your consideration of our concerns and views. Please direct any questions about these comments to Koustubh “K.J.” Bagchi, vice president of the Center for Civil Rights and Technology at The Leadership Conference, at [email protected] or Frank Torres, privacy and AI fellow at The Leadership Conference, at [email protected].
Sincerely,
The Leadership Conference on Civil and Human Rights
African American Policy Forum
Bazelon Center for Mental Health Law
Japanese American Citizens League (JACL)
National Association of Social Workers
National Disability Rights Network (NDRN)
United Church of Christ Media Justice Ministry
[i] The Leadership Conference is a coalition charged by its diverse membership of more than 240 national organizations to promote and protect the civil and human rights of all persons in the United States. Through its membership and its Media/Telecommunications Task Force, The Leadership Conference works to ensure that civil and human rights, equal opportunity, and democratic participation are at the center of communication, public education, and technology policy debates.
[ii] Request for Input from the Public on Section 7.1(b) of Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” 89 FR 31771, April 25, 2024.
[iii] Valerie Schneider, Locked Out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice, 52.1 Colum. Hum. Rts. L. Rev. 251, 254 (2020), https://blogs.law.columbia.edu/hrlr/files/2020/11/251_Schneider.pdf; Lauren Kirchner & Matthew Goldstein, Access Denied: Faulty Automated Background Checks Freeze Out Renters, The Markup & N.Y. Times (May 28, 2020), https://themarkup.org/locked-out/2020/05/28/access-denied-faulty-automated-background-checks-freeze-out-renters; Brief of the American Civil Liberties Union Foundation, The Lawyers’ Committee for Civil Rights Under Law, The National Fair Housing Alliance, and The Washington Lawyers’ Committee for Civil Rights and Urban Affairs, as Amici Curiae Supporting Appellant and Reversal, Opiotennione v. Bozzuto Mgmt. Co., No. 21-1919, (4th Cir. 2021), ECF No. 49-2, https://www.lawyerscommittee.org/wp-content/uploads/2022/08/3.-Opiotennione-v.-Bozzuto-Mgmt-Corp-amicus-brief.pdf.
[iv] Miranda Bogen & Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, Upturn, (Dec. 2018), https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20–%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf; Alina Köchling & Marius Claus Wehner, Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decisionmaking in the context of HR recruitment and HR development, 13 Bus. Res. 795 (2020), https://doi.org/10.1007/s40685-020-00134-w.
[v] Julia Angwin et al., Minority Neighborhoods Pay Higher Car Insurance Premiums Than White Areas With the Same Risk, ProPublica (Apr. 5, 2017), https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-white-areas-same-risk; Maddy Varner & Aaron Sankin, Suckers List: How Allstate’s Secret Auto Insurance Algorithm Squeezes Big Spenders, The Markup (Feb. 25, 2020), https://themarkup.org/allstates-algorithm/2020/02/25/car-insurance-suckers-list.
[vi] Robert Bartlett et al., Consumer-Lending Discrimination in the FinTech Era (Nat’l Bureau Econ. Res. Working Paper No. 25943, 2019), https://www.nber.org/papers/w25943; Bertrand K. Hassani, Societal Bias reinforcement through machine learning: a credit scoring perspective, 1 AI & Ethics 239 (2020), https://link.springer.com/article/10.1007/s43681-020-00026-z.
[vii] Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Sci. 447 (2019), https://www.science.org/doi/10.1126/science.aax2342; Trishan Panch et al., Artificial intelligence and algorithmic bias: implications for health systems, 9 J. Glob. Health (2019) (offering definitions of algorithmic bias in health systems), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6875681/; Natalia Norori et al., Addressing bias in big data and AI for health care: A call for open science, 2 Patterns (2021), https://doi.org/10.1016/j.patter.2021.100347.
[viii] Todd Feathers, Major Universities Are Using Race as a “High Impact Predictor” of Student Success, The Markup (Mar. 2, 2021), https://themarkup.org/machine-learning/2021/03/02/major-universities-are-using-race-as-a-high-impact-predictor-of-student-success; Maureen Guarcello et al., Discrimination in a Sea of Data: Exploring the Ethical Implications of Student Success Analytics, Educause Rev. (Aug. 24, 2021), https://er.educause.edu/articles/2021/8/discrimination-in-a-sea-of-data-exploring-the-ethical-implications-of-student-success-analytics.
[ix] Alex P. Miller & Kartik Hosanagar, How Targeted Ads and Dynamic Pricing Can Perpetuate Bias, Harv. Bus. Review (Nov. 8, 2019), https://hbr.org/2019/11/how-targeted-ads-and-dynamic-pricing-can-perpetuate-bias); Jennifer Valentino-DeVries et al., Websites Vary Prices, Deals Based on Users’ Information, Wall St. J. (Dec. 24, 2012), https://www.wsj.com/articles/SB10001424127887323777204578189391813881534.
[x] Rashida Richardson et al., Litigating Algorithms 2019 Report: New Challenges to Government Use of Algorithmic Decision Systems, AI Now Inst. (Sept. 2019), https://ainowinstitute.org/litigatingalgorithms-2019-us.pdf; Hadi Elzayn et al., Measuring and Mitigating Racial Disparities in Tax Audits 3–4 (Stan. Inst. for Econ. Pol’y Rsch., Working Paper, Jan. 30, 2023), https://dho.stanford.edu/wp-content/uploads/IRS_Disparities.pdf.
[xi] Aaron Sankin et al., Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them, The Markup (Dec. 2, 2021), https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them; Todd Feathers, Gunshot-Detecting Tech is Summoning Armed Police to Black Neighborhoods, Vice: Motherboard (July 19, 2021), https://www.vice.com/en/article/88nd3z/gunshot-detecting-tech-is-summoning-armed-police-to-black-neighborhoods.
[xii] Blueprint for an AI Bill of Rights, White House Office of Science and Technology Policy (OSTP), 14 October 2022.
[xiii] “Deputy Attorney General Lisa O. Monaco Delivers Remarks at the University of Oxford on the Promise and Peril of AI” (Feb. 14, 2024)
[xiv] US explores AI to train immigration officers on talking to refugees, Reuters, 7 May 2024.
[xv] Hearing on “Facial Recognition Technology: Examining Its Uses by Law Enforcement,” The Leadership Conference on Civil and Human Rights (civilrights.org), 13 July 2021.
[xvi] Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government, The White House, 16 February 2023.
[xvii] Civil Rights Leaders Announce Principles to Protect Civil Rights and Technology (October 21, 2020).
[xviii] Id.
[xix] Vision for Justice Platform Relaunched to Envision New Paradigm for Public Safety, The Leadership Conference on Civil and Human Rights (civilrights.org), 11 April 2024.
[xx] Justice Department’s Civil Rights Division Joins Officials from CFPB, EEOC and FTC Pledging to Confront Bias and Discrimination in Artificial Intelligence, Office of Public Affairs, U.S. Department of Justice, 25 April 2023.
[xxi] Blueprint for an AI Bill of Rights, supra note xii.
[xxii] Lawyers’ Committee for Civil Rights Under Law, Comments in Response to the FTC’s Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (lawyerscommittee.org), 21 Nov. 2022.