Industry Accountability: Center for Civil Rights and Technology

The Center for Civil Rights and Technology holds industries that create and use technology accountable for ensuring their platforms and products are safe and fair for everyone. The Center advocates for social media platforms to use common-sense content moderation and fact-checking to combat online lies and hate, and for companies that create and use AI to build systems that enshrine and protect individuals’ civil rights rather than harm them.

Holding Meta Accountable for Allowing Free Rein for Lies and Hate

For close to a decade now, the civil rights community has advocated for changes to online ecosystems that allow violence-inciting speech, misinformation, and disinformation to spread. We have engaged Meta (and its previous iteration, Facebook) directly to address the policies and practices that allow our communities to be harmed, threatened, and silenced.

In a step to hold Meta accountable for its announced policy shift to remove fact-checkers and content moderation guardrails around the topics of immigration and gender, The Leadership Conference’s Center for Civil Rights and Technology launched a digital campaign to call out this failed approach, its linchpin a petition gathering signatures opposing Meta’s content moderation rollback.

Real-world consequences of online lies and hate

Online lies and hate have real-world consequences. Last year, real voters’ eligibility was thrown into question, and threats of violence hit local communities like Springfield, Ohio. Aid workers were threatened following Hurricanes Helene and Milton. Election administrators resigned because of threats they faced online for doing their jobs. Mis- and disinformation lead to dangerous policies: bad actors echo lies about immigrant and trans communities to justify the hateful direction they hope to take our country.

Failure of community notes

Meta CEO Mark Zuckerberg announced that the company would adopt the same content moderation model used by X (formerly Twitter), a practice called “community notes.” Community notes without professional fact-checking do not work to combat mis- and disinformation online. Researchers estimate that Facebook and Instagram users could encounter at least 277 million more instances of hate speech and other harmful content each year because of this decision.

Content moderation keeps us all safe

Content moderation, done smartly and carefully, increases everyone’s access to accurate information. While no single method will catch every lie, careful moderation ensures that fewer dangerous ones sneak through. Our civil rights must be protected on social platforms to ensure a vibrant and open democracy and diverse online communities.

Innovation Framework


Technology is shaping nearly every aspect of modern life. While technological progress can benefit everyone, many AI tools also carry tremendous risks. These risks are not theoretical. Defective AI systems mean that you could pay more for the products you buy, be passed over for a job you applied for, or unfairly pay more for your insurance. Faulty AI systems can deny someone access to public benefits or even falsely accuse them of a crime. We have seen this happen time and time again.

AI is especially harmful when it automates existing biases against marginalized communities, including women, people with disabilities, immigrants, and communities of color. Rather than entrench faulty AI, further existing bias, or automate discrimination, technology should be safe and fair for all of us.

The Center for Civil Rights and Technology is building the Innovation Framework, recognizing that, in addition to policymakers, companies investing in, creating, and using AI and emerging technologies have a responsibility to ensure that the systems they develop and deploy respect people’s civil rights. People need assurances that the technology making decisions about them actually works, and works fairly. Now is the time to move beyond principles and implement concrete measures to ensure that the technology affecting people’s lives has appropriate guardrails. Companies and the people who work there have front-line responsibilities to ensure that goal is achieved.

The Center’s Innovation Framework will provide a foundation for assessing how the industry is incorporating principles such as safety and fairness into the development of its products. More information is coming soon.

