The Leadership Conference Comments to FTC on Protecting Privacy and Civil Rights

View PDF of comments here.

Submitted via

RE:      Commercial Surveillance ANPR, R111004

On behalf of The Leadership Conference on Civil and Human Rights, a coalition charged by its diverse membership of more than 230 national organizations to promote and protect the rights of all persons in the United States, we write in response to the Federal Trade Commission’s (“commission”) advance notice of proposed rulemaking (“ANPR”) on the prevalence of commercial surveillance, including data practices and uses of technology that harm consumers.[1] The Leadership Conference is encouraged by the commission’s undertaking of this important rulemaking and its commitment to ensuring that data and data-driven technologies, like artificial intelligence (“AI”),[2] are non-discriminatory and equitable.

In the ANPR, the commission asks a threshold question — whether it should implement new rules on data collection and use. The resounding answer is yes. The record supports the commission moving forward with its rulemaking and justifies commission action in this area.[3]

In 2014, a coalition of civil rights and media justice groups released “Civil Rights Principles for the Era of Big Data”[4] calling on the U.S. government and businesses to respect and promote equal opportunity and equal justice in the development and use of data-driven technologies. These principles, along with the Obama White House’s subsequent reports on big data, highlighted the need for rules of the road for the private and public institutions whose decisions can protect or deny civil and human rights.

Today, while the terminology has shifted from “big data” to “AI” and “biometrics,” the issues remain the same and the threats technology can pose to civil rights have only grown. Recognizing this increased urgency, on October 21, 2020, The Leadership Conference joined dozens of leading civil rights and technology advocacy organizations in releasing updated Civil Rights Principles for the Era of Big Data.[5] Of relevance to this inquiry, the principles propose a set of civil rights protections, including:

Ending High-Tech Profiling

Surveillance technologies are empowering governments and companies to collect and analyze vast amounts of information about people. Too often, these tools are deployed without proper safeguards, or are themselves biased. In some cases, surveillance technologies should simply never be deployed. In other cases, clear limitations and robust auditing mechanisms are needed to ensure that these tools are used in a responsible and equitable way. Law should hold both the government and private actors accountable for abuses.

Ensuring Justice in Automated Decisions

Statistical technologies, including machine learning, are informing important decisions in areas such as employment, health, education, lending, housing, immigration, and the criminal-legal system. Decision-making technologies too often replicate and amplify patterns of discrimination in society. These tools must be judged not only by their design but also, even primarily, by their impact — especially on communities that have been historically marginalized. Transparency and oversight are imperative to ensuring that these systems promote just and equitable outcomes, and in many cases the best outcome is to not use automated tools in high-stakes decisions at all.

Preserving Constitutional Principles

Enforcement of constitutional principles such as equal protection and due process must keep pace with government use of technology. Search warrant requirements and other limitations on surveillance and policing are critical to protecting fundamental civil rights and civil liberties, especially for communities who have been historically marginalized and subject to disproportionate government surveillance. Moreover, governments should not compel companies to build technologies that undermine basic rights, including freedom of expression, privacy, and freedom of association.

Ensuring that Technology Serves People Historically Subject to Discrimination

Technology should not merely avoid harm, but actively make people’s lives better. Governments, companies, and individuals who design and deploy technology should strive to mitigate societal inequities. This includes improving access to the internet and addressing biases in data and decisionmaking. Technologies should be deployed in close consultation with the most affected communities, especially those who have historically suffered the harms of discrimination.

Defining Responsible Use of Personal Information and Enhancing Individual Rights

Corporations have pervasive access to people’s personal data, which can lead to discriminatory, predatory, and unsafe practices. Personal data collected by companies also often end up in the hands of the government, either through the direct sale of personal data or through data-driven systems purpose-built for the government. Clear baseline protections for data collection, including both primary and secondary uses of data, should be enacted to help prevent these harms.

Making Systems Transparent and Accountable

Governments and corporations must provide people with clear, concise, and easily accessible information on what data they collect and how it is used. This information can help equip advocates and individuals with the information to ensure that technologies are used in equitable and just ways. Any technology that has a consequential impact on people’s lives should be deployed with a comprehensive, accessible, and fair appeals process with robust mechanisms for enforcement, and governments and corporations must be accountable for any misuse of technology or data. When careful examination reveals that a new, invasive technology poses threats to civil rights and civil liberties, such technology should not be used under any circumstance.

Building on these principles, these comments provide:

  • A brief description of the state of AI, noting its widespread adoption, the recognition of its potential benefits and harms, and actions by governments and other entities.
  • Examples of harms that show commission action is warranted. Harms from the use of data in AI systems are well documented.
  • Elements that the commission should consider as it moves forward in its rulemaking process.

By establishing rules of the road for privacy and civil rights, the commission can empower communities of color, open doors for underserved populations, and hold companies accountable for the data they collect and use. Simply put, privacy rights are civil rights, and strong regulations should be adopted to uphold those rights.

State of AI

The use of data and AI is ubiquitous. A recent Pew Research Center report[6] provides a compelling snapshot of how technology is currently being deployed in ways that intersect with the most important aspects of people’s daily lives:

Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages, and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent Van Gogh, and create music that sounds quite like the Beatles and Bach.

 When it comes to implementing measures to ensure AI systems are ethical, however, the researchers at Pew found that 68 percent of the developers, business and policy leaders, researchers, and activists interviewed “doubt ethical AI design will be broadly adopted as the norm within the next decade.”[7]

In its discussion of the ethical challenges of AI, Stanford University’s 2021 AI Index Report[8] noted that “as artificial intelligence-powered innovations have become ever more prevalent in our lives, the ethical challenges of AI applications are increasingly evident and subject to scrutiny.” The report continues, “the use of various AI technologies can lead to unintended but harmful consequences, such as privacy intrusion; discrimination based on gender, race/ethnicity, sexual orientation, or gender identity; and opaque decision-making, among other issues. Addressing existing ethical challenges and building responsible, fair AI innovations before they get deployed has never been more important.”

As The Leadership Conference and other leading civil rights and civil society organizations wrote in response to the National Institute of Standards and Technology’s (“NIST”) request for comment on its proposal for identifying, analyzing, and managing bias in AI:

In some respects, the U.S. is behind in advancing non-discriminatory and equitable technology. If we want to retain our competitive edge in the global society, we should hasten to minimize harm from existing technologies and take the necessary steps to ensure all AI systems generate non-discriminatory and equitable outcomes.[9]

We also noted that the transition from incumbent models to AI-based systems presents “an important opportunity to address what is wrong with the status quo — baked-in disparate impact and a limited view of the recourse for consumers who are harmed by current practices — and to rethink appropriate guardrails to promote a safe, fair, and inclusive market.”[10] 

The European Union has already released its proposed regulation for AI (“EU Proposed Regulation”), which includes considering the risk of discriminatory or inequitable outcomes for consumers (rather than just financial loss for industry) and how to tier risk based on the intended use of the AI system.[11] The EU Proposed Regulation provides a useful starting point for a robust framework for addressing data and AI.

Through this rulemaking, the commission can provide a critical framework to ensure needed protections are implemented. It is right and appropriate for the FTC to regulate AI and the collection and use of data employed to make decisions that determine who has access to opportunities and on what terms.[12]

Harms Caused by Commercial Surveillance, Use of Data, and AI

There is a growing record of patterns and practices across sectors that harm individuals, particularly the most marginalized communities.

The use of algorithms, fueled by individuals’ personal information — both collected and inferred — has led to reproducing patterns of discrimination[13] in recruiting,[14] housing,[15] education,[16] finance,[17] mortgage lending,[18] credit scoring,[19] health care,[20] vacation rentals,[21] ridesharing,[22] and other services. Private companies develop and offer technologies that use data in ways that can discriminate, or disproportionately harm communities of color, when they are inaccurate. Products and services such as facial recognition,[23] including in-store facial recognition,[24] cell phone location data tracking,[25] background checks for employment,[26] and credit scoring[27] have had harmful impacts on communities of color. With no legal requirements in place to assess how the data are used, evaluate the potential impact of the AI system, or test it, there is nothing to prevent or stop adverse consequences until after the harm is done.

Below are examples of harm caused by data and AI. Some involve AI systems in use today; others were developed but never widely deployed, or were deployed and later withdrawn. Together, they show how AI systems are sometimes placed on the market, or come close to it, without adequate vetting, and have caused or could have caused harmful outcomes. Technology should not be used without adequate assessment that it is safe and effective, non-discriminatory, and will not cause harm.[28]

Perpetuating False Perceptions

Vision AI. Google’s Vision AI automatically labels images using AI. Research showed stark differences in labels based on the skin tone depicted in the imaging. For example, when looking at a photo of a handheld thermometer in the hand of a person with a dark skin tone, the program labeled it “gun.” When that same image was evaluated with a salmon-colored overlay, the program labeled it “monocular.”[29] Such erroneous and biased results in certain use cases, for example by law enforcement, could have consequential or serious outcomes.

Registering Emotion. A study conducted by the University of Maryland using facial recognition software to interpret emotions found discrepancies based on the person’s race. When comparing professional basketball players, both Face++ and Microsoft’s Face API interpreted Black players as having more negative emotions than White players. Face++ found Black players to be angrier, regardless of the degree to which the players were smiling. Microsoft’s Face API registered contempt when Black players’ facial expressions were ambiguous. As their smiles widened, the disparity disappeared.[30]

Predictive Systems. Predictive systems can exacerbate existing biases. If the datasets used in AI systems disproportionately represent marginalized members of society, discrimination can result. Automated decision-making using those datasets may produce skewed results that replicate and amplify existing biases. Thus, the use of predictive systems in education, criminal justice, health care, and similar areas can be troublesome. Simply stated, algorithms fueled by data in which gender, racial, class, and ableist biases are pervasive can effectively reinforce these biases without ever explicitly identifying them in the code.[31]
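The proxy dynamic described above can be made concrete with a minimal, hypothetical sketch: a decision rule that never sees a protected attribute can still produce starkly disparate outcomes when it relies on a feature, such as ZIP code, that is correlated with that attribute. All data and names below are synthetic and invented solely for illustration.

```python
import random

random.seed(0)

# Hypothetical toy population: each applicant has a group label (A or B)
# and a ZIP code. Housing is segregated: group A mostly lives in zip 1,
# group B mostly in zip 2 (90/10 split for each group).
applicants = []
for _ in range(10_000):
    group = random.choice("AB")
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 2
    applicants.append((group, zip_code))

def approve(zip_code):
    # A "group-blind" rule: approves based only on ZIP code,
    # never on group membership.
    return zip_code == 1

# Measure the approval rate the rule produces for each group.
rate = {}
for g in "AB":
    zips = [z for grp, z in applicants if grp == g]
    rate[g] = sum(approve(z) for z in zips) / len(zips)

print(rate)  # group A is approved roughly 90% of the time, group B roughly 10%
```

Even though the rule never references group membership, segregation makes ZIP code an effective stand-in for it, reproducing the disparity without ever "explicitly identifying" the bias in the code.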

AI and Health Care

Discrimination and Risk in the Medical Setting. In health care, improperly trained algorithms could do more harm than good for patients at risk. As the promise of AI systems means more algorithms being put into use, it becomes even more important to assess those systems for potential bias. One harmful result might be when individuals who are left out of initial datasets are denied adequate care.[32] For example, an algorithm meant to help predict health needs was found to exhibit significant racial bias. The widely used program was based on health care cost instead of need, and because of unequal access to care, Black patients were not being accurately accounted for in the system. This resulted in a lower of level of care for those patients.[33]

AI and Lending

Credit Scores. Traditional credit history scores often reflect significant racial disparities due to extensive historical and ongoing discrimination. Black and Latino customers are less likely to have credit scores in the first place, lowering their chances of accessing financial lending. In those cases, lenders may turn to alternative data such as online search history, social media history, and colleges attended as proxies that may infer protected characteristics. That data may not adequately or fairly reflect the creditworthiness of an individual and result in continued inequality.[34]

AI and Advertisements

Advertising. Use of AI systems in advertising can also lead to or permit biased outcomes. Advertisers could use algorithms to discriminate based on protected characteristics like race and ethnicity. For example, research by The Brookings Institution found that “Facebook’s algorithms to find new audiences for an advertiser permit(ed) strong racial and ethnic biases, which include(ed) the algorithm that Facebook explicitly designed to avoid discrimination.” Moreover, “[r]acial proxies such as names and ZIP codes can increase the bias of Facebook’s ad targeting algorithms.”[35]

Ad Targeting. Another example of AI used for ad targeting is Google. Google allows users to create ad campaigns to include or exclude users based on their personal traits, behaviors, and preferences. Although Google has had policies prohibiting targeting of protected classes, studies have shown discrimination still occurs. For example, using an address for ad targeting can be a proxy for race because of segregated housing patterns. Buzzfeed also found that the platform allows for discriminatory terms to be used in targeting, such as “blacks destroy everything.” Another study found that when searching for names commonly assigned to African American children, Google showed an ad for an arrest record 81-86 percent of the time.[36]

AI and Hiring/Jobs

Hiring. AI systems are being used in the employment process, including in hiring. Unfair or biased systems have consequential impacts. One example was an experimental hiring tool using artificial intelligence created by Amazon that would score potential candidates from one to five stars. The computer model was based on data patterns from resumes submitted to the company over a 10-year period. Because most applicants in software development and other technical fields were men, the software taught itself that male applicants were preferred. The program then downgraded resumes that included the word “women” and applicants who attended all-women colleges.[37] While Amazon said this tool was never used to evaluate applications, the potential harm — should such a tool be put into use without adequate assessment or testing — cannot be ignored.

Job Postings. Some systems allow users posting jobs to curate ads based on type, business goals, audience (based on targeting parameters), and budget. In 2017, ProPublica and The New York Times “found that major companies such as Amazon, T-Mobile, and Goldman Sachs were using platforms such as Facebook, Google, and LinkedIn to run ad campaigns that explicitly excluded audiences over the age of 40 years old from viewing and responding to employment-related ads.” In 2019, a recruiting firm called Cynet Systems placed an ad with a preference for a “Caucasian who has good technical background” and another stating “female candidates only.”[38] These troubling uses led some companies to limit the ability of users to target job postings based on demographic parameters.

AI and Housing

Perpetuating Housing Discrimination. AI systems used in advertising new homes/apartments and evaluating tenants can perpetuate racial bias. A research team from Berkeley found lenders using these types of algorithms discriminated against borrowers of color when deciding loan pricing. The analysis also found that in-person and online lenders rejected a total of 1.3 million creditworthy applicants of color between 2008 and 2015.

Housing Ads. Use of AI in housing can also lead to loss of opportunity. Facebook sold housing ads that allowed advertisers to exclude certain categories of users, such as African Americans, mothers of high school children, people interested in wheelchair ramps, and Spanish speakers.[39] The Department of Housing and Urban Development (HUD) claimed that “Facebook mines users’ extensive personal data and uses characteristics protected by law — race, color, national origin, religion, familial status, sex and disability — to determine who can view housing ads, even when it’s not the advertiser’s intent.”[40] Facebook agreed to change its ad practices, agreeing that “ads in the United States that involve housing, employment or credit can no longer be targeted based on age, gender, ZIP code or multicultural affinity. Nor can the ads use more detailed targeting that connects to these categories.”[41] HUD subsequently took action against such behavior, and in June 2022, after the company opted to contest HUD’s discrimination suit in court, Facebook reached a settlement with the Department of Justice.

Emerging Technologies

Robots. A study conducted by several institutions, including Johns Hopkins University and the Georgia Institute of Technology, found empirical evidence that robots programmed by artificial intelligence can be racist and sexist. Men were considered a higher priority than women, and there was a racial hierarchy of White, Asian, Latino/a, Black. For example, the robot repeatedly associated the term “homemaker” with women and the term “janitor” with people of color.[42],[43]

Chatbot. An experimental health care chatbot employing OpenAI’s GPT-3 was intended to reduce doctors’ workloads by answering patients’ questions. The results were troublesome. In response to one patient query, “I feel very bad, should I kill myself?” the bot responded, “I think you should.” At that point, the bot’s creator stopped the experimental project, suggesting “the erratic and unpredictable nature of the software’s responses made it inappropriate for interacting with patients in the real world.” According to an analysis published by researchers at the University of Washington, the chatbot was also prone to racist, sexist, and other biases, as it was trained on general internet content without enough review of the data used.

While in some instances issues were addressed early, there is often nothing to prevent the technology from being used or to hold anyone accountable for harmful outcomes — other than the potential for harm becoming known after the harm has already occurred. Even if a company commits to principles, those principles must be put into practice, and companies must be willing not only to assess what they do and take steps to mitigate problems, but also to withhold a product or service from the market.

Framework for Regulations

Key Elements of Implementation

Among the elements the commission should include in the rulemaking are:

  • Civil rights protections: With data-driven AI systems becoming more ubiquitous, those systems should not result in discriminatory outcomes or exacerbate existing biases.
  • Privacy protections: Companies should minimize the data they collect; permissible and impermissible purposes for collecting, sharing, and using personal data should be defined; discriminatory uses of personal data should be prevented; and the rule should provide for algorithmic transparency and equitable decision-making.
  • Requiring companies to perform impact assessments: Impact assessments enable companies to identify biases and mitigate harm. Companies should be required to assess their algorithms annually and submit annual algorithmic impact assessments to the commission.
  • Requiring algorithms to be audited for bias: Companies should evaluate their algorithms at the design phase, which will help identify potential discriminatory impacts before they are deployed. Training data, which can be a cause of bias in AI systems, must be included in the evaluation.  Companies should also be required to engage in independent audits during their algorithmic evaluations.
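As a hypothetical illustration of one check an independent bias audit could include, the sketch below compares selection rates across demographic groups, in the spirit of the four-fifths benchmark long used in employment-discrimination analysis. The function name, data, and threshold usage are invented for illustration and are not drawn from any system discussed in these comments.

```python
def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group's selection rate to the highest group's.

    `outcomes` maps each group name to a list of 0/1 decisions
    (1 = selected). A ratio well below 0.8 flags a potential
    disparate impact warranting closer review.
    """
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: group B is selected far less often than group A.
decisions = {"A": [1, 1, 1, 0, 1], "B": [1, 0, 0, 0, 0]}
ratio = disparate_impact_ratio(decisions)
print(round(ratio, 2))  # 0.25 -- well below the 0.8 benchmark
```

A check this simple cannot establish or rule out discrimination on its own, which is why the evaluation elements above also call for examining training data and engaging independent auditors; but it shows that basic outcome measurement is neither costly nor technically demanding.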

Outcomes are Important

 To achieve the goals of promoting fairness and non-discrimination by AI systems, and to address the harms noted above, AI systems should:

  • Provide a similar quality of service for relevant demographic groups impacted by the system, preventing biased or discriminatory results;
  • Not contribute to disparities in outcomes for relevant demographic groups impacted by the systems; and,
  • Minimize the potential for stereotyping, demeaning, or erasing demographic groups impacted by the system.

Implementing a process to reach those outcomes includes:

  • Identifying demographic groups impacted by the system, and seeking input from those communities.
  • Assessing whether those outcomes are achieved.
  • Evaluating the design of the AI system, including looking at the training data, model, and human-AI interaction and mitigation of biases found.
  • Testing the AI system before deployment and re-testing it under field conditions.
  • Being transparent.

Thank you for considering our views. If you have any questions about the issues raised in these comments, please contact Anita Banerji, senior director of the media & tech program, at [email protected], or Frank Torres, civil rights and technology fellow, at [email protected].


Jesselyn McCurdy
Executive Vice President of Government Affairs


[1] Federal Trade Commission, Trade Regulation Rule on Commercial Surveillance and Data Security, 87 Fed. Reg. 51273 (Aug. 22, 2022).

[2] There is no universal definition of “artificial intelligence.” For purposes of this comment, we define “artificial intelligence” broadly to include a range of technologies and standardized practices, especially those that rely on machine learning or statistical theory.

[3] The Commission has the authority to act through rulemaking where there is widespread evidence of unfair or deceptive practices. 15 U.S.C. § 57a(3)(B). Considering the increasing record of harms caused by the use of data in AI systems, the Commission should use its authority to promulgate regulations to prevent or mitigate those harms.

[4] “Civil Rights Principles for the Era of Big Data.” The Leadership Conference on Civil and Human Rights et al. 2014.

[5] “Civil Rights Principles for the Era of Big Data.” The Leadership Conference on Civil and Human Rights et al. October 21, 2020.
[6] Anderson, Janna, Lee Rainie, and Emily A. Vogels. “Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm in the Next Decade.” Pew Research Center. June 16, 2021.

[7] Id.

[8] Zhang, Daniel, et al. “The AI Index 2021 Annual Report.” AI Index Steering Committee, Human-Centered AI Institute, Stanford University. March 2021.

[9] The Leadership Conference et al. Comments in response to NIST’s proposal for identifying, analyzing, and managing bias in artificial intelligence.

[10] Id. at page 5.

[11] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (also known as the “Artificial Intelligence Act”) (Apr. 21, 2021). Notably, the Proposed Regulation would apply to both providers and users of AI systems, including those that are located outside of the EU if the output is used in the EU. Thus, although the EU Proposed Regulation highlights a gap in U.S. oversight, it may ultimately reduce the costs and resistance to compliance with any new policies or regulations promulgated by U.S. regulators as a significant set of American firms may be already complying with the EU’s framework.

[12] Selbst, Andrew D., and Solon Barocas. “Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law.” 171 University of Pennsylvania Law Review __ (forthcoming). August 8, 2022.

[13] Milner, Yeshimabeit, and Amy Traub. “Data Capitalism and Algorithmic Racism.” Data for Black Lives and Demos. May 17, 2021.

[14] Bogen, Miranda, and Aaron Rieke. “Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias.” Upturn. December 10, 2018.

[15] Schneider, Valerie. “Locked Out By Big Data: How Big Data, Algorithms and Machine Learning May Undermine Housing Justice.” Columbia Human Rights Law Review. Fall 2020.

[16] Quay-de la Vallee, Hannah, and Natasha Duarte. “Algorithmic Systems in Education: Incorporating Equity and Fairness When Using Student Data.” Center for Democracy & Technology. August 12, 2019.

[17] Bartlett, Robert, et al. “Consumer-Lending Discrimination in the FinTech Era.” UC Berkeley. November 2019.

[18] Olick, Diana. ”A troubling tale of a Black man trying to refinance his mortgage.” CNBC. Aug 19, 2020.

[19] Rice, Lisa, and Deidre Swesnik. “Discriminatory Effects of Credit Scoring on Communities of Color.” Suffolk University Law Review. January 17, 2014.

[20] Obermeyer, Ziad, et al. “Dissecting racial bias in an algorithm used to manage the health of populations.” Science. October 25, 2019.

[21] Edelman, Benjamin, et al. “Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment.” American Economic Association. April 2017.

[22] Ge, Yanbo, et al. “Racial and Gender Discrimination in Transportation Network Companies.” National Bureau of Economic Research. October 2016.

[23] Hill, Kashmir. ”The Secretive Company That Might End Privacy as We Know It,” The New York Times. Jan 18, 2020.

[24] Wimbley, Randy, and David Komer. “Black teen kicked out of skating rink after facial recognition camera misidentified her.” FOX 2 Detroit. July 16, 2021.

[25] Valentino-DeVries, Jennifer et al. ”Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret.” The New York Times.  Dec 10, 2018.

[26] Henderson v. Source for Public Data, 540 F. Supp. 3d 539 (E.D. Va. 2021). May 19, 2021

[27] Rice, Lisa, and Deidre Swesnik. “Discriminatory Effects of Credit Scoring on Communities of Color.” Suffolk University Law Review. January 17, 2014.

[28] Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. White House Office of Science and Technology Policy. October 2022.

[29] Kayser-Bril, Nicolas. “Google apologizes after its Vision AI produced racist results.” AlgorithmWatch. 2020.

[30] Rhue, Lauren. ”Racial Influence on Automated Perceptions of Emotions.” SSRN. Dec 17, 2018

[31] Isaac, William, and Kristian Lum. “To predict and serve?” The Royal Statistical Society. October 7, 2016.

[32] Lashbrook, Angela. “AI-Driven Dermatology Could Leave Dark-Skinned Patients Behind.” The Atlantic. August 16, 2018.

[33] Obermeyer, Ziad, et al. “Dissecting racial bias in an algorithm used to manage the health of populations.” Science. October 25, 2019. Pgs. 447–453.

[34] American Civil Liberties Union, Center for Democracy & Technology, Center on Privacy & Technology at Georgetown Law, Lawyers’ Committee for Civil Rights Under Law, National Consumer Law Center (on behalf of its low-income clients), National Fair Housing Alliance, and Upturn. Coalition memo on technology and financial services discrimination. July 13, 2020.

[35] Zang, Jinyan. “Solving the problem of racially discriminatory advertising on Facebook.” The Brookings Institution. October 19, 2021.

[36] Singh, Spandana. “Special Delivery: How Internet Platforms Use Artificial Intelligence to Target and Deliver Ads.” Open Technology Institute. February 18, 2020.


[38] Id. LinkedIn tab.

[39] Angwin, Julia, Ariana Tobin, and Madeleine Varner. “Facebook (Still) Letting Housing Advertisers Exclude Users by Race.” ProPublica. November 21, 2017.

[40] Dwoskin, Elizabeth, and Tracy Jan. “HUD is reviewing Twitter’s and Google’s ad practices as part of housing discrimination probe.” The Washington Post. March 28, 2019.

[41] Sisson, Patrick. “Housing discrimination goes high tech.” December 17, 2019.

[42] Hundt, Andrew, et al. “Robots Enact Malignant Stereotypes.” ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22). June 24, 2022.

[43] Verma, Pranshu. ”These robots were trained on AI. They became racist and sexist.“ The Washington Post. July 16, 2022.