Letter for the Record: Hearing on “Does Section 230’s Sweeping Immunity Enable Big Tech’s Bad Behavior?”
October 27, 2020
Senator Roger Wicker
Committee on Commerce, Science, and Transportation
U.S. Senate
Washington, DC 20510
Senator Maria Cantwell
Committee on Commerce, Science, and Transportation
U.S. Senate
Washington, DC 20510
Dear Chairman Wicker and Ranking Member Cantwell,
On behalf of The Leadership Conference on Civil and Human Rights (“The Leadership Conference”), a coalition charged by its diverse membership of more than 220 national organizations to promote and protect the rights of all persons in the United States, we thank you for the opportunity to submit our views on the need for major tech companies to address threats to civil rights created or facilitated by their platforms and to improve their civil rights infrastructure. We ask that this statement be entered into the record of the committee hearing entitled “Does Section 230’s Sweeping Immunity Enable Big Tech’s Bad Behavior?” held on October 28, 2020.
The internet has created immense positive value by connecting people, facilitating civil rights advocacy, and adding new voices to our culture and public debate. However, it can also enable discriminatory conduct, exacerbate existing disparities, and give new tools to those who want to threaten, harass, intimidate, defame, or violently attack people different from themselves. While The Leadership Conference welcomes scrutiny of the role of social media companies in our democracy, we urge caution regarding potential changes to Section 230 to ensure that any proposed changes do not do more harm than good. We encourage the committee to focus on the most important opportunities to ensure these platforms serve all people, which we discuss in more detail below.
Technological progress should promote equity and justice as it enhances safety, economic opportunity, and convenience for everyone. On October 21, The Leadership Conference joined dozens of leading civil rights and technology advocacy organizations in releasing updated Civil Rights Principles for the Era of Big Data, in response to the current risks to civil rights – including COVID-19, a surge in hate-based violence, private sector and government surveillance, and disinformation on social media platforms designed to manipulate or suppress voter participation – and with an eye toward how technology can meet its promise and affirmatively promote justice and equity. These principles provide important guidelines to aid this committee in ensuring that new technologies – including algorithmic decision making, artificial intelligence, and machine learning – protect civil rights, prevent unlawful discrimination, and advance equal opportunity.
Congress should use this opportunity to ask platforms what actions they are taking or plan to take to reduce online activities that harm communities of color, religious minorities, and other marginalized communities. For years, we have urged major tech platforms to take responsibility for ensuring that their products and business processes protect civil and human rights and do not result in harm or bias against historically marginalized groups, but they have failed to take sufficient action. And despite years of advocacy urging the companies to rectify the problems, misinformation regarding the time, place, manner, and qualifications to vote, as well as content intended to suppress or deter people from voting, continues to proliferate. The failure of tech platforms to address these activities harms people of color and members of other marginalized communities. Moreover, despite new policies that ostensibly forbid white supremacy, white supremacists continue to use multiple platforms to incite racist violence against Asian Americans, African Americans, Jews, Muslims, people with disabilities, and members of the LGBTQ community. Platforms have the tools and the ability to respond effectively to these concerns; what they lack is the will. Congress should press tech companies on the actions they are taking to improve and enforce their own policies and stop the weaponization of their platforms to suppress the vote, spread hate, and undermine our democracy.
Congress should not be distracted by baseless claims of “anti-conservative” bias and should instead focus on platforms’ efforts to respond to online voter suppression and other threats to our democracy. A commitment to civil and human rights is not a “right” or “left” issue – it is about right versus wrong. Research shows that anti-conservative bias is a phantom problem; a number of studies, articles, and reports[i] show that the voices of marginalized communities are more likely to be flagged as “toxic” by human content moderators and by the artificial intelligence systems used for content moderation.
Congress should instead focus on some of the more significant challenges facing social media platforms, such as safeguarding our elections and the census from manipulation and disinformation, as well as fighting hate and harassment online. We have made a series of recommendations to counter false, misleading, and harmful content on the companies’ platforms that could lead to voter suppression and the spread of hate speech. While Google, Facebook, Twitter, and other social media platforms have made some recent policy changes, their lack of consistent enforcement makes these policies insufficient to prevent the spread of voter suppression. Platforms must also apply the tools they have deployed against disinformation on other issues, such as COVID-19, to voter suppression content, and they must prevent disinformation in political ads.[ii]
Section 230 must be considered carefully and in context. President Trump’s actions to use Section 230 to pressure social media companies are a threat to our civil liberties. The President’s Executive Order and the FCC’s recent announcement defy both statutory and constitutional principles in the name of protecting the president’s own speech online, regardless of the consequences to everyone else. And many of the current legislative proposals around Section 230 would do more harm than good. One such example, the EARN IT Act, threatens not only to exacerbate the censorship that many LGBTQ persons face online but also to endanger the welfare and safety of the sex worker community. Rather than treating changes to Section 230 as a ready-made means of platform regulation, Congress should clearly define the problem and carefully consider whether Section 230 has a role in causing or exacerbating it before making changes to the statute part of the solution.
Congress should press tech companies to conduct independent civil rights audits as well as improve their civil rights infrastructure. Structural changes within the platforms will also help better protect civil rights by ensuring platforms can hold themselves accountable to their commitments to civil rights, diversity, and inclusion. Among the companies appearing at the committee hearing, only Facebook has undertaken a civil rights audit with outside auditors, though civil rights groups have urged all the major platforms to do so. Congress must press the other tech companies to conduct credible, independent civil rights audits. But Facebook’s example demonstrates that without institutional commitment and outside pressure, the impact of an audit will be limited and short-lived.
That is why, in addition to pushing for civil rights audits, Congress must also urge tech companies to adopt structural reforms that comply with federal civil rights law. Such reforms would demonstrate that the companies understand that civil rights are not a partisan issue but are fundamental to protecting the constitutional rights of all people, and thus should be part of the organic structure and operations of these companies. This means that tech companies must hire staff with civil rights expertise in senior leadership. The civil rights infrastructure within the companies must be well resourced, empowered within the company, and consulted on the companies’ major decisions. New and clarified policies should be vetted and reviewed by internal teams with real civil rights expertise and experience prior to implementation. Finally, tech companies should provide a process and format through which civil rights advocates and the public can engage with the companies and monitor their progress.
Congress must also press tech companies to do more to advance meaningful diversity and inclusion in their workplaces and to address the lack of people of color in senior executive, engineering, and technical positions. People of color working at the companies often face discrimination and unequal pay, as well as a culture in which they are devalued. Tech companies must ensure that this does not happen in their workplaces and must address the inequities that may have already occurred. They must also expand strategies to attract and retain talent from diverse communities in order to expand access to jobs and opportunities.
Prevention of harm, not after-the-fact damage control and repair, must be the goal. This goal cannot be fully accomplished if those with civil rights expertise are not part of decision-making processes. Congress must continue to review and scrutinize tech companies to make sure that they are taking the necessary steps to accomplish it.
Congress should consider other meaningful ways to protect civil and human rights. For example, invasive data collection and use practices can lead to civil rights violations. Congress should pass comprehensive federal consumer privacy legislation that protects consumers by requiring companies to minimize the data they collect; defining permissible and impermissible purposes for collecting, sharing, and using personal data; prohibiting discriminatory uses of personal data; and providing for algorithmic transparency and fairness in automated decisions. Congress should also ensure that federal agencies are focused on identifying and ending data processing and algorithmic practices that discriminate on the basis of protected characteristics with respect to access to credit, housing, education, public accommodations, and elsewhere.
Thank you for considering our views. If you have any questions about the issues raised in this letter, please contact Leadership Conference Media/Telecommunications Task Force Co-Chairs Cheryl Leanza, United Church of Christ, Office of Communication, Inc., at [email protected], and Kate Ruane, American Civil Liberties Union, at [email protected]; or Corrine Yu, Leadership Conference Senior Program Director, at [email protected].
President and CEO
Executive Vice President for Government Affairs
[i] See Casey Newton, Leaving content moderation to volunteers is empowering racists, The Verge (June 9, 2020) https://www.theverge.com/interface/2020/6/9/21283442/content-moderation-racism-facebook-reddit-nextdoor-karen; Bertram Lee, Moderating Race on Platforms, Public Knowledge (January 29, 2020) https://www.publicknowledge.org/blog/moderating-race-on-platforms/; Thomas Davidson et al., Racial Bias in Hate Speech and Abusive Language Detection Datasets, Cornell University arXiv:1905.12516v1 (May 29, 2019) https://arxiv.org/abs/1905.12516v1; Maarten Sap et al., The Risk of Racial Bias in Hate Speech Detection, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 1668–1678 (2019) https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf; Shirin Ghaffary, The algorithms that detect hate speech online are biased against black people, Vox: Recode (August 15, 2019) https://www.vox.com/recode/2019/8/15/20806384/social-media-hate-speech-bias-black-african-american-facebook-twitter.
[ii] Leadership Conference Urges Social Media Platforms to Address Online Voter Suppression, The Leadership Conference on Civil and Human Rights (October 13, 2020), https://civilrights.org/resource/leadership-conference-urges-social-media-platforms-to-address-online-voter-suppression/.