The Leadership Conference Comments to NTIA on AI Accountability

Re: NTIA Request for Comment on AI Accountability, Docket No. NTIA-2023-0005

Dear Assistant Secretary Davidson:

On behalf of The Leadership Conference on Civil and Human Rights (The Leadership Conference) and its Media/Telecommunications Task Force, we write in response to the National Telecommunications and Information Administration’s (NTIA) AI Accountability Policy Request for Comment.[i] The Leadership Conference, a coalition charged by its diverse membership of more than 230 national organizations to promote and protect the rights of all persons in the United States, and its Media/Telecommunications Task Force work to ensure that civil and human rights, equal opportunity, and democratic participation are at the center of communication and technology policy debates.

We appreciate NTIA’s continued commitment to protecting the public in the age of rapidly emerging new technologies. As we wrote in our response to NTIA’s Request for Comments (RFC) on Privacy, Equity, and Civil Rights,[ii] we are encouraged by NTIA’s leadership in advancing the Biden administration’s goal of ensuring that the use of AI, and policies regulating the technology, are centered on equity and civil rights, including through the implementation of the AI Bill of Rights.[iii] These comments build upon those The Leadership Conference submitted in the Privacy, Equity, and Civil Rights RFC.[iv] We agree with NTIA’s and the administration’s view that for AI systems to be trustworthy, it is essential to “hold entities accountable for developing, using, and continuously improving the quality” of those systems to realize their benefits and reduce harm. Mechanisms for accountability include measuring AI system risks, for example, through testing, risk assessments, and audits of AI systems, including civil rights audits.[v] To meet the administration’s goal to “advance American values and leadership in AI” and its racial and gender equity agenda, any accountability framework that is adopted must prohibit bias and discrimination and promote equity and inclusion.

Hold Big Tech accountable and build safe and equitable AI.

As The Leadership Conference recently noted in Reflections on Civil Rights and Our AI Future,[vi] accountability is critical given the rapid creation and widespread deployment of AI. We recognize that technology now shapes nearly every aspect of modern life and that, while technological progress can and should benefit everyone, many artificial intelligence tools also carry tremendous risks for civil rights. Accountability frameworks must therefore ensure that technology creates opportunity, safety, and benefits for all rather than entrenching bias and automating discrimination.

We agree with NTIA — now is the time to act. We must shift our focus from principles to durable, measurable, enforceable, and robust safeguards for the use of AI.

As we wrote in our comments on Privacy, Equity, and Civil Rights:

There is a growing record of patterns and practices of data collection and use across sectors that harm individuals, particularly the most marginalized communities. The use of algorithms, fueled by an individual’s personal information both from data collected and inferred, has led to reproducing patterns of discrimination[vii] in recruiting,[viii] housing,[ix] education,[x] finance,[xi] mortgage lending,[xii] credit scoring,[xiii] health care,[xiv] vacation rentals,[xv] ridesharing,[xvi] and other services. Private companies are developing and offering technologies that use data in ways that can discriminate, or disproportionately harm communities of color, when they are inaccurate. Products and services such as facial recognition,[xvii] including in-store facial recognition,[xviii] cell phone location data tracking,[xix] background checks for employment,[xx] and credit scoring[xxi] have had harmful impacts on communities of color. Commercial data practices can facilitate the surveillance of and discrimination against communities of color, both by packaging and selling data to law enforcement in ways that allow them to circumvent warrant requirements and by selling biased technologies for law enforcement use.[xxii] One example is cell phone location data tracking. It has become common for law enforcement to rely on cell tower data in criminal prosecutions. Although a warrant is usually obtained, the data can be inaccurate, and the use of unreliable information from private companies in prosecutions can have grave consequences.[xxiii] With no legal requirements in place to assess how data is used, evaluate the potential impact of the AI system, test or audit it, there is nothing to prevent or stop adverse consequences until after harm has occurred.

Several recent statements from academics and companies engaged in AI highlight other concerns, including making “mitigating the risk of extinction from AI” a priority.[xxiv] While those and similar long-term issues may merit attention, addressing them should not distract from holding Big Tech accountable for the real harms caused by the use of AI today.[xxv]

AI accountability is needed to ensure civil rights are protected.

Any accountability framework that is adopted must ensure that AI and other automated decision-making technologies are centered on equity and civil rights, and it must prevent the bias and discrimination that the use of AI can cause.

We urge NTIA to consider the following questions in assessing an accountability framework:

  • When the accountability framework is applied to a specific use case, are the risks identified, are measures taken to prevent harm, and is there transparency?
  • Is the data that is used minimized and protected?
  • Can the system’s fairness and equity be demonstrated through testing, including in the field, showing that it is not biased or discriminatory? (A minimal sketch of one such test follows this list.)
  • Have impacted communities been considered and consulted, and have potential harms been identified and prevented?
  • Are the capabilities and limitations of the system known, and are inappropriate use cases restricted?
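To make the testing question above concrete, here is a minimal sketch, in Python, of one kind of pre-deployment fairness screen an accountability framework could require: comparing selection rates across demographic groups. The decisions, group labels, and the four-fifths screening threshold are illustrative assumptions for this example, not a prescribed method or a legal standard.

```python
# Minimal sketch of a pre-deployment fairness screen, assuming a hypothetical
# binary decision system and known demographic group labels. The four-fifths
# threshold below is a common screening heuristic, not a legal standard.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Compute each group's rate of favorable outcomes (decision == 1)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += int(decision)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_screen(decisions, groups, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths screening heuristic)."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Hypothetical audit data: system decisions paired with group labels.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
flagged = disparate_impact_screen(decisions, groups)
if flagged:
    print(f"Potential disparate impact; investigate before deployment: {flagged}")
```

A screen like this is only a starting point: a failing ratio signals the need for deeper investigation and field testing, not the end of an audit.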

Recently, several companies, including OpenAI and Microsoft, have advocated for AI governance structures that include licensing and the creation of a new federal agency to oversee AI.[xxvi] There are currently scant details about such an approach and no assurances that a licensing program would be robust or that a new agency would be resourced, empowered, and given a scope broad enough to ensure real accountability. While those ideas may have merit, they will take time to develop and implement.

In the meantime, harms caused by AI, including bias and discrimination, are already happening today. Existing agencies are taking action to address these harms, including by working to implement the AI Bill of Rights, provide guidance, create rules, and use their authority to enforce civil rights laws. Those agencies, including NTIA, are seeking input from stakeholders. While the new approaches suggested are being debated, existing agencies should be fully resourced to address AI and, if needed, their authority expanded.

We also note that there is an increasing call from industry for AI regulations.[xxvii] We agree that civil rights protections related to the use of technology are too critical to leave to companies or the industry to self-govern. Laws and regulations are needed to keep individuals safe, to ensure their rights are protected, and, ultimately, to hold Big Tech accountable. At issue is whether the AI laws and regulations contemplated by industry will provide real protections and accountability. Industry’s lack of support for — and opposition to — state attempts to regulate AI raises questions about the companies’ support for meaningful AI rules.[xxviii] For example, AB 331 in California included some limits on the use of AI where it was biased or discriminatory, and it called for AI risk assessments before the technology is put into use in certain circumstances. Despite limitations in its scope and enforceability, industry opposed the bill. Members of Congress have also signaled that they are considering legislation to govern AI. We hope that there is broad support from all stakeholders for federal rules that prohibit discrimination and provide concrete accountability measures, like mandating that risk assessments be conducted before AI systems are used in consequential areas.[xxix]

Other governments are actively working on rules to address the harms that can be caused by the creation and use of AI and ways to hold those developing and deploying AI accountable. Most notable is the ongoing work in the European Union (EU) to craft an AI law.[xxx] That law is intended to provide a viable framework to ensure that AI systems are safe and human rights are protected through measurable and enforceable mechanisms. Those include testing and assessments prior to and after an AI system is put into use. The work being done in the EU, with input from a broad range of stakeholders, may be instructive for creating an effective accountability framework in the United States.

The AI Bill of Rights provides an accountability baseline.

Nearly a decade ago, The Leadership Conference and other civil rights organizations recognized that increasing reliance on technology, specifically AI, had not been matched by oversight, accountability, and transparency. In 2014, working with those groups, we released the “Civil Rights Principles for the Era of Big Data.” Those principles highlight the need to protect and strengthen key civil rights protections in the face of technological change. Soon after, the Obama administration’s Big Data and Privacy Working Group released its findings, which included principles and recommendations to protect privacy and prevent algorithmic discrimination.[xxxi] The Civil Rights Principles were updated in 2020 as part of continued work to ensure technology respects civil rights and advances the public interest.[xxxii]

The Biden administration quickly established civil rights and equity as central considerations for federal agencies, and the framing for AI accountability continued to take shape. The Leadership Conference and other civil rights organizations encouraged policymakers to ensure emerging technologies “protect civil rights, prevent unlawful discrimination, and advance equal opportunity.” Based on this prior work, including consulting with a broad spectrum of stakeholders, in October 2022 the White House published the Blueprint for an AI Bill of Rights,[xxxiii] setting forth principles “for the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”

We know what needs to be done to hold entities creating or using AI and other technologies accountable — implement and operationalize the AI Bill of Rights. The AI Bill of Rights rests on a solid foundation of work to develop policy related to data and emerging technologies and should serve as the cornerstone for holding Big Tech accountable.

Any accountability framework must address the potential harms of AI and result in fair and equitable outcomes.

The AI Bill of Rights outlines sound and well-recognized guiding principles to mitigate harm from the development and use of AI: AI systems must (1) be safe and effective, (2) protect against algorithmic discrimination, (3) ensure data privacy, (4) provide notice and explanation, and (5) offer human alternatives, consideration, and fallback.[xxxiv] The AI Bill of Rights echoes the now long-standing AI principles adopted by leading tech companies, business associations, governments, and advocacy organizations.[xxxv]

Safe and effective systems

An accountability framework must require assessments to identify risks and to prevent or mitigate unsafe outcomes of AI use. AI systems should be “fit for purpose” and should not cause harm. Because AI systems are increasingly used in ways that affect individuals, there must be assurances that those uses, such as decision-making, are fair and equitable; if they are not, the use of those systems should be prohibited. Development of accountability measures should include consultation with diverse stakeholders, including the communities impacted by the use of the technology. Systems should undergo testing, including operational testing, and their design should proactively account for civil rights. Independent evaluation to corroborate safety and effectiveness is also vital to confirm that the accountability measures are being met.

We appreciate the drive for continued U.S. innovation and global competition. Adopting measures to ensure technology works safely and effectively is not inconsistent with those objectives. In fact, the adoption of safety features has often sparked innovation and competition and led to public benefit. For example, despite early industry opposition to auto safety measures like seatbelts and airbags, car companies today compete on their safety records.[xxxvi] And the public is much safer on the roads. Likewise, despite predictions that the EU’s privacy rules, the General Data Protection Regulation (GDPR), would stifle innovation and competition by restricting the use of data, U.S. technology and business has thrived.[xxxvii] Those data protection rules provide baseline privacy protections that citizens want,[xxxviii] and companies managed to adapt to the new regime. Similarly, civil rights protections should not be diminished or disregarded in the name of innovation or competition when it comes to AI.

Prohibit AI discrimination

An accountability framework for AI must prohibit discrimination. Protecting against bias and discrimination is a critical element of achieving the administration’s day-one equity goals. Those equity goals have formed the basis of ongoing administration work to ensure individuals’ rights are protected, Big Tech is held accountable, and agency actions to implement the AI Bill of Rights take place.

Prohibiting discrimination related to the development and deployment of AI was also included in the bipartisan American Data Privacy and Protection Act (ADPPA). Achieving that goal may be complex because there may not be a one-size-fits-all approach: different use cases may warrant different accountability mechanisms, like testing and assessments, appropriate to a specific deployment. That is why the overarching goal of ensuring fairness and equity must be applied broadly across use cases and industry sectors. This prohibition is consistent with existing civil rights laws.

Data privacy

An accountability framework must address the collection and use of information about people. That information drives AI systems, and any framework would be lacking if it did not include mechanisms to protect people from misuse of their information and limit the incentives for companies to build surveillance systems as inputs to AI models. As we noted in our comments on Privacy, Equity, and Civil Rights, “privacy rights are civil rights.” Needed privacy protections include minimizing what is collected; clear permissible and impermissible purposes for collecting, sharing, and using information about people; and prohibitions on the discriminatory collection and use of personal information. It is also important to assess the data used to train AI systems for risk of bias and discrimination.
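As one illustration of what assessing training data could look like in practice, the sketch below compares each demographic group’s share of a hypothetical training set against a reference population share and flags large gaps. The record format, attribute name, and tolerance are assumptions made for this example; representation skew is only one of many bias risks a real assessment would need to cover.

```python
# Minimal sketch of one training-data check, assuming a hypothetical dataset
# of records with a demographic attribute. Comparing group shares against a
# reference population is one (of many) signals of potential bias.

from collections import Counter

def representation_gaps(records, attribute, reference_shares, tolerance=0.05):
    """Return groups whose share of the data differs from the reference
    population share by more than `tolerance`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical records and census-style reference shares.
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(representation_gaps(records, "group", {"A": 0.6, "B": 0.4}))
# -> {'A': {'observed': 0.9, 'expected': 0.6}, 'B': {'observed': 0.1, 'expected': 0.4}}
```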

Notice and explanation

An accountability framework must address transparency. Individuals are entitled to notice when an AI system is being used in consequential ways and provided with an understandable explanation of how the system works for that use. The “Civil Rights Principles for the Era of Big Data” outline this important aspect of an accountability framework, stating, “Governments and corporations must provide people with clear, concise, and easily accessible information on what data they collect and how it is used.” Transparency also means ensuring ongoing and robust community engagement and consideration of multilingual needs.

However, there is often a lack of transparency when AI systems are used to make consequential decisions, including decisions related to the justice system.[xxxix] Individuals must be informed when a system is used and provided with useful and understandable explanations of how the decision was derived. That information must enable an individual to gauge the fairness of the decision and decide whether to seek recourse. If the user of an AI system cannot explain how the system informs or makes a decision, then that system should not be used.[xl]
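For simple models, generating such an explanation is straightforward. The sketch below assumes a hypothetical linear scoring model, in which each factor’s contribution is its weight times the applicant’s value, and reports the result in plain language. The weights and applicant values are invented for illustration; opaque models require more involved explanation techniques, but the obligation is the same.

```python
# Minimal sketch of a plain-language decision explanation for a hypothetical
# linear scoring model. Weights and applicant values are illustrative only.

weights = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.3}

# In a linear model, each factor's contribution is weight * value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score >= 0.0 else "denied"

print(f"Decision: {decision} (score {score:+.2f})")
for factor, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    direction = "lowered" if value < 0 else "raised"
    print(f"  {factor} {direction} the score by {abs(value):.2f}")
```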

Human alternatives, consideration, and fallback

As the AI Bill of Rights notes, individuals “should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy the problems that are encountered.” This is an important aspect of accountability where an AI system is used to make consequential decisions in areas like employment and hiring, housing, health care, credit and lending, and education. Given the complexity of AI systems and the potential for error, where an individual’s economic opportunity or well-being is at risk because of an AI-driven decision, the accountability framework must provide the ability to seek human intervention and recourse.

The accountability framework should be informed by existing and ongoing efforts to ensure responsible AI.

As the adoption and use of AI across sectors became ubiquitous, there was growing recognition of both its potential benefits and its potential risks. As a result, government, industry, and advocacy efforts have focused on how to identify and address those risks. That ongoing work can be instructive and serve as a roadmap for the AI accountability framework being contemplated by NTIA.

NIST Risk Management Framework[xli]

The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, released in January 2023, provides an initial framework that entities developing and deploying technology can begin to use to manage the risks associated with artificial intelligence. Developed through a consensus-driven approach, the AI Framework is voluntary. Its goal is to improve a company’s incorporation of trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

The Risk Management Framework is an important first step, but more work needs to be done to account for specific and sector-specific use cases. The ultimate proof of the framework’s viability and durability rests on its measurable success in identifying, mitigating, and preventing the risks to civil rights posed by the use of AI systems.

Civil Rights Audits[xlii]

A viable and durable accountability framework must holistically consider how well, or how badly, an entity is managing critical civil rights concerns. Civil rights audits are a key way to surface such concerns. An October 2021 report, “The Rationale for and Key Elements of a Business Civil Rights Audit,” lays out a set of principles to define effective and meaningful civil rights audits.

National Fair Housing Alliance Purpose, Process, and Monitoring — A New Framework for Auditing Algorithmic Bias in Housing & Lending[xliii]

In February 2022, the National Fair Housing Alliance (NFHA) released a new framework for auditing algorithmic systems, “Purpose, Process, and Monitoring,” that covers the life cycle of a model: pre-development, development, and post-development, including monitoring. This framework provides an approach for evaluating internal controls and mitigating risks that may be inherent in algorithmic systems and harm consumers. NFHA’s model for assessment may provide a useful template for other sectors where AI systems are used.
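As a rough illustration of the post-development monitoring stage, the sketch below recomputes a between-group selection-rate ratio for each month of a hypothetical decision log and flags months that fall below a screening threshold, so drift toward disparity is caught after deployment rather than only at a pre-deployment audit. The log format and the 0.8 threshold are assumptions for this example, not NFHA’s prescribed methodology.

```python
# Minimal sketch of post-deployment monitoring over a hypothetical log of
# (month, group, decision) outcomes. A falling rate ratio signals drift.

from collections import defaultdict

def monthly_rate_ratios(log):
    """For each month, return min(group selection rate) / max(group rate)."""
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # month -> group -> [favorable, total]
    for month, group, decision in log:
        entry = stats[month][group]
        entry[0] += int(decision)
        entry[1] += 1
    ratios = {}
    for month, groups in stats.items():
        rates = [fav / tot for fav, tot in groups.values()]
        ratios[month] = min(rates) / max(rates) if max(rates) > 0 else 0.0
    return ratios

# Hypothetical outcome log; a ratio below 0.8 should trigger human review.
log = [("2023-01", "A", 1), ("2023-01", "B", 1),
       ("2023-02", "A", 1), ("2023-02", "B", 0)]
for month, ratio in sorted(monthly_rate_ratios(log).items()):
    status = "OK" if ratio >= 0.8 else "REVIEW"
    print(month, f"rate ratio {ratio:.2f}", status)
```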

Agency enforcement[xliv]

Any credible accountability framework must be enforceable. All agencies should ensure that AI does not result in discriminatory outcomes. Moreover, the accountability framework must reflect the applicability of existing laws. The recent joint statement from the Department of Justice, the Federal Trade Commission, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission — announcing their commitment to using existing civil rights and consumer protection laws to protect individuals where AI systems are used — is an important step in this direction.

Conclusion

As discussed in these comments, companies should not be allowed to use an AI system that may have a consequential impact until they can first demonstrate that the system is safe, fair, and equitable, and neither biased nor discriminatory. That requires companies to understand, through measures like assessments and audits, how an algorithm works before relying on it to make decisions.

Thank you for your consideration of our recommendations on AI accountability. We look forward to working with NTIA on this issue and others of importance to our country. If you have any questions about this letter, please contact Media/Telecommunications Task Force Co-Chair Cheryl Leanza, United Church of Christ Media Justice Ministry, at [email protected], or Jonathan Walter, media/tech policy counsel at The Leadership Conference, at [email protected].

Sincerely,

The Leadership Conference on Civil and Human Rights
American Civil Liberties Union
Asian Americans Advancing Justice | AAJC
Communications Workers of America (CWA)
National Consumer Law Center (on behalf of its low-income clients)
National Fair Housing Alliance
National Urban League
United Church of Christ Media Justice Ministry

[i] National Telecommunications and Information Administration, AI Accountability Policy Request for Comment, April 13, 2023, https://www.federalregister.gov/documents/2023/04/13/2023-07776/ai-accountability-policy-request-for-comment

[ii] National Telecommunications and Information Administration, Department of Commerce, 88 FR 3714, January 20, 2023.

[iii] White House, Blueprint for an AI Bill of Rights, Oct. 4, 2022, https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[iv] Comments of The Leadership Conference on Civil and Human Rights, March 6, 2023, NTIA-Comments-Privacy-Equity-Civil-Rights.pdf (civilrightsdocs.info)

[v] “Making the Case for Business Civil Rights Audits,” The Leadership Conference on Civil and Human Rights, October 27, 2021, https://civilrights.org/blog/civil-rights-audit-report/

[vi] “Reflections on Civil Rights and Our AI Future,” The Leadership Conference on Civil and Human Rights, April 18, 2023.

[vii] Milner, Yeshimabeit, and Amy Traub. “Data Capitalism and Algorithmic Racism.” Data for Black Lives and Demos. May 17, 2021. https://www.demos.org/sites/default/files/2021-05/Demos_%20D4BL_Data_Capitalism_Algorithmic_Racism.pdf

[viii] Bogen, Miranda, and Aaron Rieke. “Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias.” Upturn. December 10, 2018. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20–%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf

[ix] Schneider, Valerie. “Locked Out By Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice.” Columbia Human Rights Law Review. Fall 2020. https://blogs.law.columbia.edu/hrlr/files/2020/11/251_Schneider.pdf

[x] Quay-de la Vallee, Hannah, and Natasha Duarte. “Algorithmic Systems in Education: Incorporating Equity and Fairness When Using Student Data.” Center for Democracy & Technology. August 12, 2019. https://cdt.org/insights/algorithmic-systems-in-education-incorporating-equity-and-fairness-when-using-student-data/

[xi] Bartlett, Robert, et al. “Consumer-Lending Discrimination in the FinTech Era.” UC Berkeley. November 2019. https://faculty.haas.berkeley.edu/morse/research/papers/discrim.pdf

[xii] Olick, Diana. “A troubling tale of a Black man trying to refinance his mortgage.” CNBC. August 19, 2020. https://www.cnbc.com/2020/08/19/lenders-deny-mortgages-for-blacks-at-a-rate-80percent-higher-than-whites.html

[xiii] Rice, Lisa, and Deidre Swesnik. “Discriminatory Effects of Credit Scoring on Communities of Color.” Suffolk University Law Review. January 17, 2014. https://cpb-us-e1.wpmucdn.com/sites.suffolk.edu/dist/3/1172/files/2014/01/Rice-Swesnik_Lead.pdf

[xiv] Obermeyer, Ziad, et al. “Dissecting racial bias in an algorithm used to manage the health of populations.” Science. October 25, 2019. https://www.science.org/doi/10.1126/science.aax2342

[xv] Edelman, Benjamin, et al. “Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment.” American Economic Association. April 2017. https://www.aeaweb.org/articles?id=10.1257/app.20160213

[xvi] Ge, Yanbo, et al. “Racial and Gender Discrimination in Transportation Network Companies.” National Bureau of Economic Research. October 2016. https://www.nber.org/system/files/working_papers/w22776/w22776.pdf

[xvii] Hill, Kashmir. “The Secretive Company That Might End Privacy as We Know It.” The New York Times. January 18, 2020. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

[xviii] Wimbley, Randy, and David Komer. “Black teen kicked out of skating rink after facial recognition camera misidentified her.” FOX 2 Detroit. July 16, 2021. https://www.fox2detroit.com/news/teen-kicked-out-of-skating-rink-after-facial-recognition-camera-misidentified-her

[xix] Valentino-DeVries, Jennifer, et al. “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret.” The New York Times. December 10, 2018. https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html

[xx] Henderson v. Source for Public Data, 540 F. Supp. 3d 539 (E.D. Va. 2021). May 19, 2021. https://casetext.com/case/henderson-v-source-for-public-data

[xxi] Rice, Lisa, and Deidre Swesnik. “Discriminatory Effects of Credit Scoring on Communities of Color.” Suffolk University Law Review. January 17, 2014. https://cpb-us-e1.wpmucdn.com/sites.suffolk.edu/dist/3/1172/files/2014/01/Rice-Swesnik_Lead.pdf

[xxii] NAACP Legal Defense Fund, Comment on FTC Rulemaking on Commercial Surveillance and Data Security. https://www.naacpldf.org/wp-content/uploads/LDF-Comment-on-FTC-Rulemaking-on-Commercial-Surveillance-Data-Security27.pdf

[xxiii] “Experts say law enforcement’s use of cellphone records can be inaccurate.” The Washington Post. June 27, 2014. https://washingtonpost.com/local/experts-say-law-enforcements-use-of-cellphone-records-can-be-inaccurate/2014/06/27/028be93c-faf3-11e3-932c-0a55b81f48ce_story.html

[xxiv] “Statement on AI Risk,” Center for AI Safety, https://www.safe.ai/statement-on-ai-risk#open-letter

[xxv] “Artificial Intelligence Could Lead to Extinction, Experts Warn,” BBC News, May 30, 2023, https://www.bbc.com/news/uk-65746524

[xxvi] Brad Smith, “How do we best govern AI?,” Microsoft On the Issues, May 25, 2023; Grant Gallagher, “Why AI Panic is Not About Safety,” Newsweek, June 5, 2023, https://www.newsweek.com/why-ai-panic-not-about-safety-opinion-1804140

[xxvii] “Brad Smith, Microsoft president, says he believes A.I. regulation is coming,” CBS News, May 28, 2023.

[xxviii] California Assembly Bill 331, Automated Decision Tools, introduced January 30, 2023.

[xxix] “To regulate artificial intelligence, Congress has a lot to learn,” NPR, May 15, 2023.

[xxx] “AI Act: a step closer to the first rules on Artificial Intelligence,” European Parliament News.

[xxxi] “Big Data: Seizing Opportunities, Preserving Values,” Executive Office of the President, May 1, 2014 (archives.gov).

[xxxii] “Civil Rights Principles for the Era of Big Data,” The Civil Rights, Privacy, and Technology Table.

[xxxiii] Blueprint for an AI Bill of Rights, White House Office of Science and Technology Policy, October 2022.

[xxxiv] Blueprint for an AI Bill of Rights, supra note xxxiii.

[xxxv] “Principled Artificial Intelligence,” Berkman Klein Center for Internet & Society, Harvard University; Alex Moltzau, “Artificial Intelligence and the Various Principles of AI,” DataSeries, Medium.

[xxxvi] “50 Years Ago, ‘Unsafe at Any Speed’ Shook the Auto World,” The New York Times, November 26, 2015.

[xxxvii] “The New Rules of Data Privacy,” Harvard Business Review, February 25, 2022.

[xxxviii] “Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information,” Pew Research Center, November 15, 2019.

[xxxix] “AI in the Criminal Justice System,” Electronic Privacy Information Center (EPIC).

[xl] “Regulators Take Aim at AI to Protect Consumers and Workers,” U.S. News and World Report, May 26, 2023.

[xli] “AI Risk Management Framework,” National Institute of Standards and Technology (NIST), January 2023.

[xlii] “Making the Case for Business Civil Rights Audits,” The Leadership Conference on Civil and Human Rights, October 27, 2021, https://civilrightsdocs.info/pdf/reports/Civil-Rights-Audit-Report-2021.pdf

[xliii] “Purpose, Process, and Monitoring Framework (PPM),” National Fair Housing Alliance, February 2022.

[xliv] “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” Equal Employment Opportunity Commission, Department of Justice Civil Rights Division, Consumer Financial Protection Bureau, and Federal Trade Commission, April 25, 2023 (EEOC-CRT-FTC-CFPB-AI-Joint-Statement(final).pdf).