Testimony of Maya Wiley Before the House Select Subcommittee on the Weaponization of the Federal Government


Chair Jordan, Ranking Member Plaskett, and members of the subcommittee: Thank you for the opportunity to testify today. My name is Maya Wiley, and I am the president and CEO of The Leadership Conference on Civil and Human Rights, a diverse coalition of more than 240 national organizations working to build an America as good as its ideals. Our coalition is dedicated to promoting and protecting the civil and human rights of every person in the United States. We are nonpartisan and work to advance and protect voting rights and access to affordable, quality health care. We fight hate, bias, misinformation, and disinformation, working with both policymakers and the private sector.

The issues before the subcommittee today are among the most momentous facing our democracy. The ability of every person to have access to accurate and reliable information is a cornerstone of our democracy. So, too, are our First Amendment rights. These fundamental rights are at grave risk today — but not because government scientists, researchers, advocates, the White House, or ordinary citizens are sounding alarm bells about the destructive and dangerous dissemination of false and debunked conspiracy theories that abound on social media platforms. The risk to our democracy is that private social media companies are neither transparent about nor accountable for enforcing their own policies to ensure public trust and safety.

Foreign governments, crime rings, organizations, and malicious individuals have actively created and disseminated baseless conspiracy theories about the integrity of our elections. They have spread dangerous, debunked, and factually unsupported claims about one of the deadliest pandemics in our history. This toxic content has caused immense harm — including, in some instances, acts of violence.

Let’s be clear: This content violates the policies that the social media companies themselves have written. For example, Instagram describes its content moderation policy to reassure users in this way: “In May of this year, we began working with third-party fact-checkers in the U.S. to help identify, review, and label false information. These partners independently assess false information to help us catch it and reduce its distribution.” Other platforms have similar policies. YouTube, for example, prohibits “[c]ontent that contradicts local health authorities’ or WHO’s guidance on vaccine safety, efficacy, and ingredients.”

Nonetheless, users of these platforms — including witnesses before this subcommittee — have routinely violated these policies. With disturbing disregard for the facts, Mr. Kennedy claimed on Instagram that the COVID-19 vaccines are “the deadliest vaccine ever made.” That was false. He falsely claimed that “people with African blood react differently to vaccines than people with caucasian blood, they’re much more sensitive.” In September 2020, at the height of the pandemic, he falsely stated that clinical trials showed the flu shot to be 2.4 times more deadly than COVID-19. He falsely claimed that mRNA vaccines could “permanently alter [people’s] DNA.” He falsely claimed that Bill Gates supports vaccines because he wants to “chip us,” promoting the conspiracy theory that vaccines are used to implant microchips in humans to track them and control their behavior. He has falsely claimed that “Wi-Fi radiation opens up your blood-brain barrier, so all these toxins that are in your body can now go into your brain.” Just last week, Mr. Kennedy falsely suggested that COVID-19 could be a “bioweapon” that was “targeted to attack Caucasians and black people” and that “[t]he people who are most immune are Ashkenazi Jews and Chinese.”

The consequences of social media companies failing to enforce their own policies for the trust and safety of their users can be devastating for all of us and, too often, particularly for people of color and women of all races. Take the pandemic, which devastated all communities, though not equally: Black people and Native Americans died from COVID-19 at three to four times the rate of white Americans, and Latinos died at twice the rate of white Americans. There are many reasons for these awful statistics, including the legacy of decades of disparities in access to adequate health care. In response, public health officials worked hard to dispel the vaccine hesitancy that plagued those communities. Mr. Kennedy’s racist conspiracy theories directly undermined those essential efforts.

The real question before the subcommittee is this: Why do you want to make it more difficult for Instagram to keep anyone’s racist, antisemitic, and scientifically baseless conspiracy theories off its platform? The cesspool of bigotry and hate that infects social media platforms has fueled a sharp rise in violence motivated by race, gender, sexual orientation, gender identity, and religion. One need only look at the mass shooters inspired by previous shooters’ online manifestos to see the bloody consequences.

Despite the policies these platforms claim to have, we have tracked a systematic failure to enforce them. Despite these platforms’ immense profits and user bases — nearly 3 billion users for Facebook, more than 2.5 billion for YouTube, and nearly 1.5 billion for Instagram — public reporting shows that the platforms are cutting the very staff whose job it is to protect the public from dangerous content. A major social media platform simply cannot responsibly apply its policies with only one person responsible for political misinformation and two for medical misinformation. Yet YouTube shed two of its five policy experts who worked on hate speech and harassment issues, leaving only one person in charge of misinformation policy worldwide. Twitter now has only about 1,300 employees, an 80 percent drop from the roughly 7,500 it employed before Elon Musk’s takeover, and it has completely eliminated the team that oversaw disinformation and trust and safety issues. Sadly, Twitter is not alone. Meta recently cut 11,000 jobs, including deep cuts to its trust and safety teams.

The argument that officials in the Biden administration unconstitutionally censored Mr. Kennedy, or anyone else, by discussing misinformation with social media platforms and flagging content to assist those platforms in applying their own policies, is legally absurd. Every member of this subcommittee knows — or should know under their oath to uphold the Constitution — that the First Amendment applies to governmental restrictions of speech, not private companies like Twitter, Facebook, or YouTube. The Biden administration did not ban Mr. Kennedy from Instagram; Instagram did. Instagram did so because Mr. Kennedy violated its policies.

The plaintiffs in Missouri v. Biden tried to evade that fundamental constitutional requirement by claiming that administration officials coerced social media platforms into censoring disfavored speech. That contention is similarly absurd. The government may “advocate and defend its own policies.” Board of Regents of the Univ. of Wis. Sys. v. Southworth, 529 U.S. 217, 229 (2000). Accordingly, “government officials do not violate the First Amendment when they request that a private intermediary not carry a third party’s speech so long as the officials do not threaten adverse consequences if the intermediary refuses to comply.” O’Handley v. Weber, 62 F.4th 1145, 1158 (9th Cir. 2023). The Biden administration communicated with social media platforms to flag illegal and harmful content, including lies about the 2020 election and conspiracy theories about COVID-19 vaccines. Those communications fall well within the government’s constitutionally permissible role of advocating for responsible corporate conduct and assisting platforms in implementing their own content moderation policies.

The Supreme Court has made clear that officials “can be held responsible for a private decision only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that” of those officials. Blum v. Yaretsky, 457 U.S. 991, 1004 (1982) (emphasis added). For example, the Court held that a state agency violated the First Amendment by threatening booksellers with criminal prosecution unless they removed certain books from their shelves because the booksellers’ compliance was not “voluntary.” Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 68 (1963). As the Court explained, “[p]eople do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around.” Id.

In stark contrast, here the White House simply flagged objectionable content and offered recommendations about how to handle content that violated the companies’ own policies. In fact, there is no factually supported allegation that the White House threatened the companies; the most concerning allegation made against White House officials was itself baseless misinformation. Social media platforms remained free to decide how to moderate content on their platforms, and there was no allegation of any threat of legal action. Indeed, the platforms frequently exercised that freedom by declining to take down content flagged by Biden administration officials. Those platforms’ “choice[s],” made in direct opposition to officials’ suggestions, could hardly “in law be deemed” to be those of the government.

Judge Doughty’s order in Missouri v. Biden granted a stunningly sweeping injunction whose analysis, in the words of the Department of Justice, “reflects an insupportably broad view of what interactions can make the government responsible for private parties’ actions.” Unsurprisingly, the Fifth Circuit promptly stayed the injunction and fast-tracked the government’s appeal. Thankfully, the appellate court is poised to correct the district court’s profound legal errors before the injunction can take effect.

The baseless claims of censorship defy common sense as well as the law. If the White House is so coercive, why have social media companies rolled back their trust and safety policies? Both Twitter and Meta have withdrawn or weakened their COVID-19 misinformation rules. Twitter’s policy now says: “Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.” Meta rolled back its policies on June 16, 2023: “Our Covid-19 misinformation rules will no longer be in effect globally as the global public health emergency declaration that triggered those rules has been lifted.” Meta now takes a “tailored” approach, removing COVID-19 misinformation only in countries where a COVID-19 public health emergency remains in effect. Similarly, YouTube rolled back its policy of removing disinformation about the validity of the 2020 election — a move that The Leadership Conference publicly and loudly decried.

This subcommittee called this hearing to discuss a frivolous, politically motivated legal case brought by state attorneys general espousing an unsubstantiated conspiracy theory about government civil servants, in an effort to thwart the legitimate government interest in the health, safety, and security of the American people and our democratic processes. It is immensely harmful for this subcommittee to use its legislative oversight power in a way that could itself coerce social media companies into ignoring their own policies, policies they already enforce feebly and increasingly refuse to enforce at all, despite the very real harms to life and limb, as well as to democracy.

If this subcommittee is truly concerned about the weaponization of government, then it should begin by refraining from calling hearings whose only purpose is to intimidate social media platforms into promoting the content of a particular political persuasion. If this subcommittee is truly concerned about government coercion of social media companies, it should consider how this very hearing could constitute government coercion to prevent social media platforms from enforcing their constitutionally permissible policies designed to keep us safe from hate, from bias, and from mis- and disinformation.

No matter our political affiliations, we all have a stake in free and fair elections, in public health, and in trust in government. It is long past time for us to work together to ensure that social media companies provide a platform for a robust exchange of views that is free from the toxic content that too often floods our feeds.

Thank you for inviting me to testify today. I am pleased to answer any questions you may have.