Letter for the Record – Hearing “Online Platforms and Market Power, Part 6: Examining the Dominance of Amazon, Apple, Facebook, and Google”

August 4, 2020

The Honorable David N. Cicilline
Chair
Subcommittee on Antitrust, Commercial, and Administrative Law
House Committee on the Judiciary
2138 Rayburn House Office Building
Washington, DC 20515

The Honorable F. James Sensenbrenner
Ranking Member
Subcommittee on Antitrust, Commercial, and Administrative Law
House Committee on the Judiciary
2138 Rayburn House Office Building
Washington, DC 20515

Dear Chairman Cicilline and Ranking Member Sensenbrenner,

On behalf of The Leadership Conference on Civil and Human Rights (“The Leadership Conference”), a coalition charged by its diverse membership of more than 220 national organizations to promote and protect the rights of all persons in the United States, we thank you for the opportunity to submit our views regarding the need for social media platforms to address online voter suppression and improve civil rights infrastructure, and we ask that this statement be entered into the record of the Subcommittee hearing entitled “Online Platforms and Market Power, Part 6: Examining the Dominance of Amazon, Apple, Facebook, and Google,” which was held on Wednesday, July 29, 2020.

Ensuring a fair, inclusive, and accessible election demands extreme vigilance. With less than 100 days until the November election, Congress must use its convening and oversight power to ensure that tech platforms and companies do everything they can to meaningfully address and counter online voter suppression while protecting civil rights. For the last several years, The Leadership Conference has engaged with social media platforms to encourage them to address these issues. We have made a series of recommendations to combat false, misleading, and harmful content that could lead to voter suppression on the companies’ platforms. While Facebook and other social media platforms have made some policy changes, uneven enforcement has left those changes insufficient to prevent the spread of voter suppression. Additional background and details about our work and proposals are contained in a statement for the record that The Leadership Conference submitted for the House Energy and Commerce joint hearing on June 24, 2020, “A Country in Crisis: How Disinformation Online is Dividing the Nation.”[1]

Facebook in particular has come under scrutiny for its practices. The final report of the Facebook Civil Rights Audit, released on July 8, found a deficit in the understanding and application of civil rights at the company.[2] When Facebook and the few other companies that dominate and control social media fail to address content that leads to voter suppression and attacks civil rights, the result is a corrosive effect on our democracy. As the final audit report states:

If politicians are free to mislead people about official voting methods (by labeling ballots illegal or making other misleading statements that go unchecked, for example) and are allowed to use not-so-subtle dog whistles with impunity to incite violence against groups advocating for racial justice, this does not bode well for the hostile voting environment that can be facilitated by Facebook in the United States. We are concerned that politicians, and any other user for that matter, will capitalize on the policy gaps made apparent by the president’s posts and target particular communities to suppress the votes of groups based on their race or other characteristics. With only months left before a major election, this is deeply troublesome as misinformation, sowing racial division and calls for violence near elections can do great damage to our democracy.[3]

Given the market power that tech companies currently wield, Congress must urge social media platforms and tech companies to (1) improve and enforce their current community standards policies on voter engagement/interference and civic activities to prevent voter suppression; (2) use their COVID-19 policies and procedures as a model for how to both provide accurate information and prevent disinformation about voting and elections; (3) prevent disinformation in political ads; and (4) improve civil rights infrastructure.

Platforms Must Enforce and Improve Their Current Policies

The Antitrust, Commercial, and Administrative Law Subcommittee is appropriately using its oversight authority to review the market power of the tech companies. The subcommittee can also push the platform companies to enforce and improve their current community standards policies on voter engagement and civic activities. The platforms must implement their policies fairly and regardless of the speaker: swiftly removing voter suppression content that violates those policies, downranking such content in search results, or labeling content and including correct voting information.

While Facebook has policies to remove content that contains false or misleading information about how to vote or participate in the election process,[4] the company is not consistently enforcing them. For example, Facebook made an intentional decision to take no action to address a post by President Trump on May 26 that contained false information about mail-in ballots in Michigan. Facebook has likewise decided to take no action on similar posts by the president with false information about mail-in ballots in Nevada and California.

Facebook must consistently enforce its current policies on election integrity and voter interference and not “pick and choose” when it will enforce its standards. As stated in the third civil rights audit report:

Facebook’s voter interference policy prohibits false misrepresentations regarding the “methods for voting or voter registration” and “what information and/or materials must be provided in order to vote.” The ballots and ballot applications issued in Nevada and Michigan were officially issued and are current, lawful forms of voter registration and participation in those states. In California, ballots are not being issued to “anyone living in the state, no matter who they are.” In fact, in order to obtain a mail-in ballot in California one has to register to vote.

Facebook decided that none of the posts violated its policies. Facebook read the Michigan and Nevada posts to be accusations by President Trump that state officials had acted illegally, and that content challenging the legality of officials is allowed under Facebook’s policy. Facebook deemed the California post to be non-violating of its provision for “misrepresentation of methods for voter registration”.[5]

In addition, the report stated:

To the civil rights community, there was no question that these posts fell squarely within the prohibitions of Facebook’s voter interference policy. Facebook’s constrained reading of its policies was both astounding and deeply troubling for the precedents it seemed to set.[6]

The subcommittee and Congress must urge the platforms to take a wider range of affirmative steps to address this content, including removal, labeling, and/or fact-checking of inaccurate and misleading posts. Unless the platforms take such steps, content on their services can directly lead to voter suppression and adversely affect the integrity of elections.

Platforms Must Apply Disinformation Prevention Tools to Voting
Platforms must improve their processes and work on solutions that prevent the posting and spread of content, whether through user accounts, ads, organic posts, or open and closed groups, that could suppress or manipulate the votes of African Americans and other targeted communities.

Facebook’s proactive response to the COVID-19 pandemic models the range of actions that can be taken to effectively combat voter suppression content. Recognizing its responsibility to protect its users and keep them safe, the company has worked to push out reliable and factual information about the virus and how to stay safe during its spread. Facebook also takes affirmative steps to inform users when they have interacted with harmful information or myths about COVID-19.[7] Google/YouTube and Twitter have also instituted policies to label or prevent the spread of content that contains misinformation about COVID-19.[8]

Despite these efforts, disinformation about COVID-19 is unfortunately still spreading rapidly. As reported this week, a video with false claims about COVID-19 was posted on Facebook, YouTube, and Twitter. The video received 14 million views and was shared 600,000 times on Facebook, and it was viewed 40,000 times on YouTube. President Trump and Donald Trump, Jr. shared the video on their Twitter accounts.[9] All three platforms eventually took the video down. While this response was far from perfect because immediate action was not taken before the damage was done, it does show that the platforms have the ability to address false information. In our ongoing discussions with Facebook, we have asked the company to utilize the same tools and resources to remove voter suppression content and proactively disseminate truthful information from trusted sources on the ways that voters can cast a ballot safely this year. If Facebook and the other platforms can address false information about COVID-19, they should and must do the same to address voter suppression.

At the same time, efforts to exploit fears surrounding COVID-19 can lead to the spread of disinformation and voter suppression content that can adversely affect citizens and prevent them from voting, particularly communities of color, who are disproportionately impacted by COVID-19 and have historically faced, and continue to face, barriers to the ballot box. Unless the public has accurate information about the range of options through which to request and submit completed ballots, many voters – particularly people of color, Native Americans, people with disabilities, limited-English proficient citizens, students, and other historically marginalized citizens – simply will not have equal access to the ballot box, and the promise of our democracy will not be fulfilled.

Facebook is taking some steps to address voter education and provide authenticated voting information through the creation of its Voting Information Center. In addition, the company has made policy changes prohibiting false claims about polling place conditions in the 72 hours before Election Day and limiting the newsworthiness exemption for politicians’ content.[10] Facebook states that the Center will provide accurate and authentic information about how to vote, including information and deadlines for mail-in ballots, early voting, and in-person voting.[11] If implemented properly, providing factual information about voting on Facebook’s platforms is a positive step and one that we have requested in our discussions with the company.

However, these efforts, while well-intentioned, do not address the larger issue: the spread of underlying content and disinformation that leads to voter suppression on the platforms, which adversely affects citizens’ ability to vote, particularly in communities of color. It is also unclear how and whether these changes will be enforced, especially since Facebook does not fully implement its current voter interference policies. Moreover, Facebook has indicated that it will attach a link to the Voting Information Center for posts that discuss voting. While this can be useful, a blanket labeling system that treats all information equally is not the preferred approach. Labeling should help users determine when they are viewing misleading posts, which means links or labels should only be attached to posts that contain disinformation and misinformation about voting or elections.

Facebook has shown us, through its efforts to provide accurate information on COVID-19, that it is equipped to monitor and prevent misinformation. Facebook, Google, and the other platforms must make similar efforts to ensure that their platforms are not used to spread inaccurate and misleading information that suppresses voting rights and manipulates voters. We urge the subcommittee to press Facebook and Google to detail the solutions they plan to implement to fix these issues before the election.

Platforms Must Prevent Disinformation in Political Ads
Disinformation that is included in political ads can often make its way onto platforms as organic content that spreads and leads to voter suppression. Twitter recently announced a prohibition on political ads,[12] while Google/YouTube limits the targeting of ads to age, gender, and general location.[13] Contextual targeted advertising, such as serving ads to people reading or watching a story about a specific issue like the economy, is still allowed by Google.[14] Google states that it will not allow political advertisers to make false claims[15] and has also stated that it will remove election-related content on YouTube that may pose a risk of serious harm.[16] Facebook, however, has not made substantive changes to its political ads policy. Microtargeting based on a large number of characteristics, interests, and demographics is still allowed, and there is scant language in Facebook’s policy about addressing false claims in political ads.[17] In addition, the concentration of political ads on so few platforms, particularly on Facebook, can raise market power concerns.

Facebook’s hands-off policy also means that content in political ads is largely unchecked. In addition, despite Google’s ban on false claims in political ads and its removal of certain types of harmful content, it is unclear how consistently it enforces this policy or what types of ads are removed, which can still allow false information to spread widely on YouTube.[18]

Facebook announced in June that it is giving users more control over the political ads they see,[19] and it is expanding its ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity, or immigration status are a threat to the physical safety, health, or survival of others.[20] There are also reports that Facebook is considering a temporary ban on political ads in the period leading up to the November election.[21] But giving users more choice over what ads they see, or banning ads for a limited amount of time, does little to address the underlying subject matter and disinformation that is often prevalent in ads and that leads to voter suppression. Moreover, claims in ads that people of a specific race, ethnicity, or other background are a threat to safety are already covered by Facebook’s community standards. We request that the subcommittee and Congress urge Facebook and other platforms to take affirmative steps to address disinformation in political ads, including:

  • Ensuring that advertisements containing falsehoods previously verified under the platforms’ own standards are prevented from running again, rather than requiring each ad to be individually examined
  • Preclearing political ads in the 72 hours before the general election to prevent ads containing disinformation from going viral before they are detected and taken down

Improving Civil Rights Infrastructure
Structural changes within the platforms will also help better protect voting and other civil rights by ensuring that platforms can hold themselves accountable to their commitments to civil rights, diversity, and inclusion. Among the companies appearing at the subcommittee hearing, only Facebook has undertaken a civil rights audit with outside auditors, though civil rights groups have urged all the major platforms to do so. Congress must urge the other tech companies to conduct credible independent civil rights audits. But even in Facebook’s case, without the institutional changes we have been urging the company to make, the audit’s impact will be limited and short-lived.

The audit has been an incredible tool for ensuring that Facebook continues to examine its impact on the civil rights of all people in the United States, for identifying in real time critical upcoming moments where Facebook’s policies and enforcement require modification, and for providing crucial internal and external benchmarks for examining progress. However, without a clear structural commitment from Facebook, the benefits of two years of work on the civil rights audit are unlikely to last beyond its conclusion. And while Facebook’s oversight board – if properly staffed and structured – has the potential to assist Facebook’s content moderation decision-making, it cannot substitute for staff internal to the organization whose sole objective is the protection and promotion of civil rights.

Congress must urge Facebook, Google, Amazon, Apple, and other tech companies to adopt structural reforms that comply with federal civil rights law and demonstrate that the companies understand that civil rights are not a partisan issue, but instead are fundamental to protecting the constitutional rights of all people and thus should be part of the organic structure and operations of these companies. This means that tech companies must hire staff with civil rights expertise into senior leadership. The civil rights infrastructure must be well-resourced, empowered within the company, and consulted on major decisions. New and clarified policies should be vetted and reviewed by internal teams with real civil rights expertise and experience prior to implementation. Finally, tech companies should provide a process and format through which civil rights advocates and the public can engage with the companies and monitor their progress.

Congress must also press tech companies to do more to address meaningful diversity and inclusion in their workplaces and the lack of people of color in senior executive, engineering, and technical positions. People of color working at these companies often face discrimination and unequal pay, as well as a culture in which they are devalued. Tech companies must ensure that this does not happen in their workplaces and must address the inequities that may have already occurred. They must also expand strategies to attract and retain talent from diverse communities in order to broaden access to jobs and opportunities.

Prevention of harm, not after-the-fact repair of damage, must be the goal. This goal cannot be fully accomplished if those with civil rights expertise are not part of the decision-making processes. Congress must continue to review and scrutinize tech companies to make sure that they are taking the necessary steps to accomplish this goal.

Conclusion
The threat that online voter suppression poses to the integrity of our democracy, and the need for civil rights infrastructure, especially when content is concentrated in a limited number of platforms, cannot be overstated. The threat to safe and fair elections and civil rights can have a corrosive effect on the fabric of our country. After largely ignoring these issues, social media platforms and tech companies have taken some small steps in recent months to address the problems. But far more needs to be done, and the companies need to be more engaged on solutions to address the proliferation of false, misleading, and harmful content. As the election approaches, it is critical to fix these issues as soon as possible. The Leadership Conference urges Congress to press the companies to institute the reforms outlined in this letter. We also stand ready to work with Congress and elected officials to find solutions that will keep our democracy safe, limit the concentration of content, and stop the suppressive effect that disinformation is having on civil rights and racial justice. Should you require further information or have any questions regarding this issue, please contact David Toomey at [email protected].

 

Sincerely,

Vanita Gupta                                        LaShawn Warren
President and CEO                             Executive Vice President for Government Affairs

CC: The Honorable Jerrold Nadler, Chair, House Committee on the Judiciary
The Honorable Jim Jordan, Ranking Member, House Committee on the Judiciary

[1] http://civilrightsdocs.info/pdf/policy/letters/2020/Leadership_Conference_Statement_6.24.20_House_EC_Hearing.pdf

[2] https://about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf, p. 10

[3] Ibid.

[4] https://about.fb.com/news/2019/10/update-on-election-integrity-efforts/#voter-suppression

[5] https://about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf, p. 38

[6] Ibid.

[7] https://about.fb.com/news/2020/07/coronavirus/

[8] https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html; https://support.google.com/youtube/answer/9891785

[9] https://www.cnn.com/2020/07/28/tech/facebook-youtube-coronavirus/index.html

[10] https://www.facebook.com/zuck

[11] https://about.fb.com/news/2020/06/voting-information-center/

[12] https://business.twitter.com/en/help/ads-policies/prohibited-content-policies/political-content.html

[13] https://www.blog.google/technology/ads/update-our-political-ads-policy/

[14] Ibid.

[15] Ibid.

[16] https://youtube.googleblog.com/2020/02/how-youtube-supports-elections.html

[17] https://www.facebook.com/policies/ads/restricted_content/political

[18] https://www.nytimes.com/2020/02/03/technology/youtube-misinformation-election.html

[19] https://about.fb.com/news/2020/06/voting-information-center/

[20] https://www.facebook.com/zuck

[21] https://www.washingtonpost.com/technology/2020/07/10/facebook-ads-politics-ban/