Leadership Conference OMB AI Guidance Memo Comments

December 5, 2023

Clare Martorana
U.S. Federal Chief Information Officer
Office of the Federal Chief Information Officer
Office of Management and Budget
725 17th Street, N.W.
Ste. 50001
Washington, D.C.  20503

Submitted electronically via www.regulations.gov

Re: Request for Public Comment on Draft Memorandum – Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (AI); FR Doc. 2023-24269, 23 Nov. 2023.

Dear Ms. Martorana,

On behalf of The Leadership Conference on Civil and Human Rights (The Leadership Conference), its Center for Civil Rights and Technology, and the undersigned organizations, we write in response to the Office of Management and Budget’s (OMB) Request for Comments on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum (Memo).  The Leadership Conference is a coalition charged by its diverse membership of more than 240 national organizations to promote and protect the civil and human rights of all persons in the United States.  Through its membership, its Center for Civil Rights and Technology, and its Media/Telecommunications Task Force, The Leadership Conference works to ensure that civil and human rights, equal opportunity, and democratic participation are at the center of communications, public education, and technology policy debates. We have been actively engaged in policy development to ensure civil rights are central to the development and use of new technologies, especially where those technologies are rights- and safety-impacting.

Introduction
As AI systems and tools are being adopted across a broad range of agency tasks, including those that impact individuals who are the most marginalized, the provisions of the OMB Memo are imperative.

With the Memo, OMB is taking a significant step in ensuring AI is effectively governed across the federal government.  New agency requirements and guidance for AI governance, innovation, and risk mitigation, including through specific risk management practices for uses of AI that impact the rights and safety of the public, are critical.  While AI has the potential to improve operations and efficiency across the federal government, those outcomes will only be achieved if people impacted by those systems trust the decisions being made and are not harmed by them. The Memo provides actionable guidance to agencies that sets out how to ensure the use of AI upholds our democratic values and earns that trust.

Here we underscore important elements of the Memo that should not be diluted as the draft is finalized, as well as offer some suggestions for improvement:

AI systems and tools must be shown to be safe, trustworthy, and enabling of rights-protecting outcomes before they are put into use.  The Memo centers equity and rights.  In an August 4, 2023 letter to the White House, leading civil rights and civil society organizations said, “[F]ederal agencies funding, acquiring, or using an AI system have a responsibility to ensure that the system works and is fit for purpose.”  The groups further urged that the federal government should not use AI systems unless they are shown to be effective and safe. No definition of safe and effective is meaningful unless it is explicit and clear that it includes being non-discriminatory and non-violative of civil and human rights.  Simply put, AI should work, and work for everyone.[i]  Therefore, it is right that protecting civil liberties and advancing equity are at the center of the Memo.  The American public should be protected against existing and potential harms from AI, including threats to people’s rights, opportunities, jobs, economic well-being, and access to critical resources and services.

The marginalized communities served by the different agencies across the federal government are those that bear the most risk from the use of untested and unsafe AI systems.  People expect that risks associated with other regulated products will be identified, mitigated, and made known.  Likewise, we expect AI technology to be safe and effective, i.e., to work for everyone.

The Memo builds on administration actions to ensure equitable AI and address AI harms. People who face discrimination on the basis of race, ethnicity, national origin, religion, sex, sexual orientation, gender identity, income, immigration status, or disability are more likely to be harmed by automated systems and often lack the resources to respond to harms when they occur. These harms are well documented and span numerous sectors, including housing,[ii] employment,[iii] financial services and credit,[iv] insurance,[v] public health and health care,[vi] education,[vii] public accommodations,[viii] government benefits and services,[ix] and policing.[x]

The Memo continues the measures this administration has already taken to begin the process of mitigating the risks of AI, including Executive Order 14091 (“Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government”),[xi] the AI Bill of Rights, NIST’s AI Risk Management Framework,[xii] the 2023 National AI Research and Development Strategic Plan,[xiii] and the plan for the National AI Research Resource.[xiv] Executive Order 14091, for example, instructs federal agencies to use all of their available authorities to combat algorithmic discrimination.[xv] The National AI R&D Strategic Plan calls for investments in technical research to develop frameworks for accountability, fairness, privacy, and bias, as well as research to understand and mitigate the social and ethical risks of AI.[xvi] These efforts provide a strong basis for a national AI strategy that is centered on equity and civil rights. These efforts, and their operationalization as detailed in the Memo, are crucial.

The Memo puts in place appropriate and reasonable requirements. We appreciate the administration’s continued commitment to equity and civil rights related to the development and use of AI. These values underpin our democracy and are reflected in the Memo. Prior to procuring, using, or funding powerful new AI technology, agencies must also ensure that the technology works. That means that the technology has had sufficient, transparent testing to ensure that it will produce intended, fair, equitable, and unbiased results and will not produce inequitable outcomes for historically disadvantaged groups.

Ideally, these AI systems would be designed, procured, and deployed with equity in mind. The Memo delivers on this outcome by providing appropriate and reasonable guidance to agencies looking to use AI. These requirements are critical to achieving the administration’s goal of advancing equity and protecting civil rights, and they need to be kept in the final version.

The Memo puts the rights-protecting principles of the AI Bill of Rights into practice across agencies. The Memo lists categories of “rights-impacting” AI uses that trigger risk assessment and mitigation requirements. It also provides a process for identifying future rights-impacting AI uses that would fall within scope and trigger the risk management and other requirements detailed in the Memo.  Recognition of use cases that are rights- or safety-impacting is significant and necessary for agencies to take action to implement appropriate safeguards.  Not surprisingly, given the ubiquitous adoption of AI, that list is broad.  Given the rapid pace at which new technology is created, and at which existing systems are used in new ways, it is important that the Memo provides the opportunity to add AI use cases.  It is also vital that the Memo calls for transparency, as well as human review and recourse, and applies those requirements across the federal government. Taken together, these safeguards will help ensure that the American public is broadly protected and able to realize AI’s potential benefits.

By focusing on agency governance structures, the Memo appropriately includes provisions that will help ensure AI is properly managed.  A sound AI governance structure, including through the appointment of Chief AI Officers (CAIOs), actionable measures to ensure cross-agency coordination, and staff training, is an important starting point. A clear reporting structure, with defined roles and responsibilities and with appropriate authority and resources, will be necessary for the implementation of the Memo’s requirements.  Cross-agency coordination, and collaboration across the federal government, will help ensure consistency in the implementation of AI policy.  Staff guidance is also critical to meeting the Memo’s objectives.  For example, training on how to conduct assessments, real-world testing, and ongoing monitoring, as well as how to make determinations called for by the Memo, such as whether an AI use case is rights-impacting, will help ensure compliance.  Compliance plans will help keep agencies on track and accountable for achieving consistency with the Memo.  Likewise, detailed AI use case inventories, as well as reporting on use cases deemed not subject to the inventory, will help provide the means to hold agencies accountable.

The Memo appropriately focuses on ensuring that innovation is equitable and used to benefit the public. As AI becomes more prevalent in society, we must consider both the benefits and challenges of incorporating these tools into daily life, including by government agencies.  With the widespread use of AI, individuals are grappling with the impacts of discriminatory rights-impacting systems, leading to lost economic opportunities, higher costs or denial of loans and credit, adverse impacts on their employment or ability to get a job, lower quality healthcare, and barriers to housing.  Just because an AI system is available does not mean it should be put to use.  Agencies must ask whether these systems are suitable and fit for purpose, and whether better alternatives are available.

Even with these concerns, we recognize that AI offers the potential to expand opportunities and ensure people are treated fairly, but only if innovation is equitable.  There cannot be responsible AI without equitable innovation.  To that end, the Memo should be clear that equitable innovation, not just innovation, is a priority, and it should include metrics to help agencies ensure that public benefit remains at the core of the technology’s use.  This is an aspect of AI where civil society and civil rights groups, including those organizations that represent diverse communities, can provide constructive views as agencies move forward to implement the Memo.

Risks from rights-impacting and safety-impacting AI must be managed.  Just as we know the harms biased and broken AI systems can cause, we know what can be done to identify, prevent, or mitigate those harms.  The Memo includes concrete, measurable, and scalable actions that agencies will be required to take to ensure that AI systems work and risks are managed.  In fact, these actions reflect practices already discussed in tech policy, such as the EU AI Regulation, and reflect the “responsible AI principles” adopted by many companies and industry sectors.  We agree that agencies should consider disparate impact relative to the use of AI, and that requirement should remain in the Memo.

The Memo’s success will rest on how well it guides agencies toward more beneficial and equitable outcomes and whether it focuses agencies on making decisions based on democratic values, such as fairness, safety, privacy, inclusiveness, transparency, and accountability.  To that end, the Memo includes the following sound requirements: implementation of risk management requirements; pre-deployment impact assessments; real-world testing; independent evaluations and ongoing monitoring; requirements for explainability and transparency; training for staff procuring or using AI; a requirement to consult with affected groups; and the need for agencies to provide for remedies and recourse.

Clarifying provisions of the Memo will close potential loopholes and address shortcomings and gaps. Clarifying some aspects of the Memo will assist agencies in complying with it, help to ensure accountability, and enable the government to achieve the Memo’s objectives, furthering the administration’s goals.

Beyond this, OMB should look to improve the Memo by doing the following:

Build clearer parameters around when CAIOs can seek waivers or exceptions from having to meet risk management requirements.  Greater clarity, and limits on the circumstances in which a waiver or exception can be granted, are needed so that the public can be assured that AI systems work as intended.  The Memo provides the CAIO with significant control over granting waivers and exceptions, and it should include a check on that authority.  For example, there should be a process for recourse or appeal to another senior official where there is disagreement about a waiver or exception decision.  In addition, more certainty is needed regarding the factors used to determine whether to grant waivers or exceptions from designating AI use cases as rights-impacting.  Specifically:

  • When a waiver or exception is granted, there must be the ability to seek reconsideration of that decision. For example, should new information, such as testing or real-world experience, indicate that harm is occurring because of AI, use of that system should cease until it can be reevaluated.
  • The Memo should be clear that waivers and exceptions sunset so the system can be periodically reevaluated.
  • The Memo should require that agencies consider less rights-impacting alternatives before granting waivers or exceptions.
  • The Memo should require that agencies be transparent about the waivers or exceptions that have been granted through public reporting.

Provide the opportunity for the public to request that AI use cases be designated as rights- or safety-impacting.  Impacted communities and the public at large should be able to request that use cases be evaluated as rights- or safety-impacting.  Those who may be subject to the technology are often in the best position to identify harms and potential harms.  To the extent a use case has not been designated as rights- or safety-impacting, or an existing system is used for a new purpose, communities should have a path, outlined in the Memo, to engage with agencies.

Fully resource and fund agency governance structures, including the establishment of Civil Rights Offices, to meet the challenges identified in the Memo.

Provide clarity to ensure that federal grants, especially to state and local governments, are also covered by the Memo.

Provide guidance with public input to assist agency staff with responsible implementation of the Memo.  It is unclear whether there will be adequate guidance for agency staff tasked with implementing the Memo.  For example, guidance on acceptable testing and tolerances will be critical in assessing areas such as whether a system is “fair.”

Provide clarity and more details on how to put easily understandable public reporting into practice, including for the AI use case inventories.  In the August letter, the signatories also urged the administration to expand the questions required for AI use case inventories under EO 13960 and OMB M-21-06 guidance.[xvii] Use case inventories should include information that would allow the public to assess adherence to the White House’s AI Executive Order and the Memo.  OMB should also publish the inventories in a format that is understandable and usable by the American people.  Finally, agencies should publish an annual report assessing their progress in implementing the Memo for their AI use cases.

Develop best practices for federal agencies overseeing public benefits programs.  The Memo is clear that public benefits are a rights-impacting use, and therefore those use cases must meet the Memo’s risk mitigation requirements.  However, the Memo’s notice and remedy requirements for rights-impacting AI do not meet the minimum constitutional due process requirements for public benefits decisions. In addition, most public benefits decisions are made at the state and local level. The Memo currently does not, but should, include guidance to federal agencies to construct best practices for the benefits programs they oversee.

Algorithmic systems used in benefits programs can cause a number of harms, including making determinations based on corrupt or discriminatory data and arriving at determinations without explanation or transparency.  Applying the risk management protections to benefits determinations will help address these concerns.

Require agencies to consider the impact of AI on people with disabilities.  People with disabilities continue to face accessibility challenges in using AI systems, despite the Executive Order’s call for accessibility. Agencies need to intentionally include people with disabilities by building systems that conform to accessibility standards and by being mindful that the disability community is a heterogeneous group comprising multiple subpopulations. Agencies should also consider the impact that differences in language may have to ensure accessibility for the communities where AI systems are used. Including people with disabilities on AI teams is an effective harm prevention strategy as agencies identify the impact of AI on this diverse population.  As agencies respond to the new requirements from OMB, they must give careful consideration to disability inclusion and hire people with disabilities who understand disability rights and other civil rights laws.

Establish a process for ongoing and regular public engagement, including with civil society and civil rights organizations, on agencies’ use of AI.  The public has the most to gain or lose from the use of AI.  It is critical that the public interest is represented.  Agencies should be required to establish defined programs to proactively seek community input as they implement the Memo and in their ongoing operations that are covered by the Memo.

Innovate for good.  Agencies should ensure that AI is used to make progress in tackling societal challenges, such as accessibility, health disparities, food insecurity, equity, and justice. Likewise, innovating to advance equity needs more focus in order to take hold and become a reality.  It is critical that agencies seek community engagement as part of this process.  The Memo should include a specific mandate to achieve this outcome.

Further tools and practices.  It will be important to establish expectations, presumably created by NIST, for the standards-making and other processes that agencies will put into place.  Specifically, any standards or testing developed pursuant to the Memo, for example for auditing or risk assessments, should be evaluated against the tenets of the AI Bill of Rights and the AI Executive Order to ensure that the Memo furthers the goals of equity and inclusion.

Conclusion
We appreciate this opportunity to comment on the Memo, which sets out a practical and actionable approach for identifying, measuring, and mitigating harms before AI is put into use, as well as evaluating existing systems.  If properly implemented, the Memo could be a significant step toward achieving equitable innovation.

For the use of AI to be successful, agencies must ensure that the benefits and risks of AI are considered early on and throughout the AI lifecycle, through design, development, and deployment.  Before procuring or using an AI system, an agency should understand its limitations, recognize its intended uses as well as potential misuses, consider how to ensure the AI works for the people, and prevent harm.  Thank you for your consideration of our concerns and views. Please direct any questions about these comments to Koustubh “K.J.” Bagchi, vice president of the Center for Civil Rights & Technology at The Leadership Conference on Civil and Human Rights, at [email protected] or Frank Torres, civil rights and technology fellow at The Leadership Conference on Civil and Human Rights, at [email protected].

Sincerely,

The Leadership Conference on Civil and Human Rights
A. Philip Randolph Institute
Access Now
American Association of People with Disabilities
American Federation of Teachers
Americans for Democratic Action (ADA)
Asian Americans Advancing Justice – AAJC
Center for American Progress
Center for Democracy & Technology
Center for Law and Social Policy (CLASP)
Clearinghouse on Women’s Issues
Common Cause
Disability Rights Education and Defense Fund (DREDF)
Electronic Frontier Foundation
Fund for Leadership, Equity, Access and Diversity
Houston Immigration Legal Services Collaborative
Impact Fund
Japanese American Citizens League
League of Women Voters of the United States
National Association for Latino Community Asset Builders (NALCAB)
National Center for Learning Disabilities
National Center for Parent Leadership, Advocacy, and Community Empowerment (National PLACE)
National Coalition for Literacy
National Consumer Law Center (on behalf of its low-income clients)
National Disability Rights Network (NDRN)
National Employment Law Project
National Fair Housing Alliance
National Health Law Program
National Hispanic Media Coalition
National Organization for Women Foundation
National Partnership for Women & Families
National Urban League
National Women’s Law Center
NETWORK Lobby for Catholic Social Justice
Southern Echo Inc.
The Policing Project at NYU School of Law
The Trevor Project
UnidosUS
United Church of Christ Media Justice Ministry

[i] Letter from the Center for American Progress, The Leadership Conference on Civil and Human Rights, and The Center for Democracy and Technology to the White House, Aug. 4, 2023.

[ii] Valerie Schneider, Locked Out by Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice, 52.1 Colum. Hum. Rts. L. Rev. 251, 254 (2020), https://blogs.law.columbia.edu/hrlr/files/2020/11/251_Schneider.pdf; Lauren Kirchner & Matthew Goldstein, Access Denied: Faulty Automated Background Checks Freeze Out Renters, The Markup & N.Y. Times (May 28, 2020), https://themarkup.org/locked-out/2020/05/28/access-denied-faulty-automated-background-checks-freeze-out-renters; Brief of the American Civil Liberties Union Foundation, The Lawyers’ Committee for Civil Rights Under Law, The National Fair Housing Alliance, and The Washington Lawyers’ Committee for Civil Rights and Urban Affairs, as Amici Curiae Supporting Appellant and Reversal, Opiotennione v. Bozzuto Mgmt. Co., No. 21-1919, (4th Cir. 2021), ECF No. 49-2, https://www.lawyerscommittee.org/wp-content/uploads/2022/08/3.-Opiotennione-v.-Bozzuto-Mgmt-Corp-amicus-brief.pdf.

[iii] Miranda Bogen & Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, Upturn (Dec. 2018), https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20–%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf; Alina Köchling & Marius Claus Wehner, Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development, 13 Bus. Res. 795 (2020), https://doi.org/10.1007/s40685-020-00134-w.

[iv] Robert Bartlett et al., Consumer-Lending Discrimination in the FinTech Era (Nat’l Bureau Econ. Res. Working Paper No. 25943, 2019), https://www.nber.org/papers/w25943; Bertrand K. Hassani, Societal Bias reinforcement through machine learning: a credit scoring perspective, 1 AI & Ethics 239 (2020), https://link.springer.com/article/10.1007/s43681-020-00026-z.

[v] Julia Angwin et al., Minority Neighborhoods Pay Higher Car Insurance Premiums Than White Areas With the Same Risk, ProPublica (Apr. 5, 2017), https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-white-areas-same-risk; Maddy Varner & Aaron Sankin, Suckers List: How Allstate’s Secret Auto Insurance Algorithm Squeezes Big Spenders, The Markup (Feb. 25, 2020), https://themarkup.org/allstates-algorithm/2020/02/25/car-insurance-suckers-list.

[vi] Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Sci. 447 (2019), https://www.science.org/doi/10.1126/science.aax2342; Trishan Panch et al., Artificial intelligence and algorithmic bias: implications for health systems, 9 J. Glob. Health (2019) (offering definitions of algorithmic bias in health systems), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6875681/; Natalia Norori et al., Addressing bias in big data and AI for health care: A call for open science, 2 Patterns (2021), https://doi.org/10.1016/j.patter.2021.100347.

[vii] Todd Feathers, Major Universities Are Using Race as a “High Impact Predictor” of Student Success, The Markup (Mar. 2, 2021), https://themarkup.org/machine-learning/2021/03/02/major-universities-are-using-race-as-a-high-impact-predictor-of-student-success; Maureen Guarcello et al., Discrimination in a Sea of Data: Exploring the Ethical Implications of Student Success Analytics, Educause Rev. (Aug. 24, 2021), https://er.educause.edu/articles/2021/8/discrimination-in-a-sea-of-data-exploring-the-ethical-implications-of-student-success-analytics.

[viii] Alex P. Miller & Kartik Hosanagar, How Targeted Ads and Dynamic Pricing Can Perpetuate Bias, Harv. Bus. Rev. (Nov. 8, 2019), https://hbr.org/2019/11/how-targeted-ads-and-dynamic-pricing-can-perpetuate-bias; Jennifer Valentino-DeVries et al., Websites Vary Prices, Deals Based on Users’ Information, Wall St. J. (Dec. 24, 2012), https://www.wsj.com/articles/SB10001424127887323777204578189391813881534.

[ix] Rashida Richardson et al., Litigating Algorithms 2019 Report: New Challenges to Government Use of Algorithmic Decision Systems, AI Now Inst., (Sept. 2019), https://ainowinstitute.org/litigatingalgorithms-2019-us.pdf; Hadi Elzayn et al., Measuring and Mitigating Racial Disparities in Tax Audits 3–4 (Stan. Inst. for Econ. Pol’y Rsch., Working Paper, Jan. 30, 2023), https://dho.stanford.edu/wp-content/uploads/IRS_Disparities.pdf.

[x] Aaron Sankin et al., Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them, The Markup (Dec. 2, 2021), https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them; Todd Feathers, Gunshot-Detecting Tech is Summoning Armed Police to Black Neighborhoods, Vice: Motherboard (July 19, 2021), https://www.vice.com/en/article/88nd3z/gunshot-detecting-tech-is-summoning-armed-police-to-black-neighborhoods.

[xi] White House, Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, 2023 [hereinafter E.O. 14091].

[xii] See National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (2023).

[xiii] See National Science and Technology Council, National Artificial Intelligence Research and Development Strategic Plan 2023 Update (2023).

[xiv] See National Artificial Intelligence Research Resource Task Force, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource (2023).

[xv] E.O. 14091.

[xvi] National AI R&D Strategic Plan at 12-13.

[xvii] Executive Office of the President, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, Executive Order 13960, December 3, 2020; Office of Management and Budget, Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications, November 17, 2020.