S07 E03: Centering Tech and AI on Civil Rights

Pod Squad

Koustubh “K.J.” Bagchi, Vice President, Center for Civil Rights and Technology, The Leadership Conference on Civil and Human Rights
David Toomey, Voting Rights and Technology Fellow, Center for Civil Rights and Technology, The Leadership Conference on Civil and Human Rights
Claudia Ruiz, Senior Civil Rights Policy Analyst, UnidosUS

Our Host

Kanya Bennett, Managing Director of Government Affairs, The Leadership Conference on Civil and Human Rights and the Leadership Conference Education Fund

Contact the Team

For all inquiries related to Pod For The Cause, please contact Taelor Nicholas ([email protected]).

Episode Transcript


Kanya: Welcome to “Pod for the Cause,” the official podcast of the Leadership Conference on Civil and Human Rights and the Leadership Conference Education Fund, where we take on the critical civil and human rights issues of our day as we work to save our democracy. I’m your host, Kanya Bennett, coming to you from our nation’s capital, Washington, D.C. Today on “Pod for the Cause,” we will explore the latest tech and civil rights happenings, focusing on AI and mis- and disinformation. For this conversation, we are joined by three smart and savvy tech equity experts. Two happen to be my esteemed colleagues, and the other a distinguished coalition partner. Let us welcome my colleague K.J. Bagchi, Vice President of the Center for Civil Rights and Technology at the Leadership Conference on Civil and Human Rights. K.J., welcome.

K.J.: Thank you for having me.

Kanya: We are also joined by my colleague, Dave Toomey, Voting Rights and Technology Fellow for the Center for Civil Rights and Technology, again, here at the Leadership Conference on Civil and Human Rights. Welcome back, Dave. This is your second time on the show.

Dave: Yep. Thanks, Kanya. Thanks for inviting me again.

Kanya: And finally, let us acknowledge our coalition partner, Claudia Ruiz, Senior Civil Rights Policy Analyst with UnidosUS. Thank you for joining us today, Claudia, and we’re so happy you’re here.

Claudia: Thanks so much, Kanya. I’m really excited to be here.

Kanya: So artificial intelligence, or AI, is a part of our everyday life. From Apple’s Siri to Amazon’s Alexa, we have become dependent upon AI answering our questions, navigating our comings and goings, and even predicting who we are compatible with on dating apps. Now, look, this is what one of our podcast producers tells me; I know nothing about these apps. But what happens when AI is created to cause chaos, by manipulating social media algorithms, or intentionally feeding the public false information, or interfering with our election process? We realize that AI and other technologies, like every other aspect of our society, can perpetuate harms and inequities if we are not intentional around their development and implementation. In response to these concerns, the Leadership Conference established the Center for Civil Rights and Technology to anchor emerging and existing technology in civil rights. So let me start the conversation there, with the center, and turn to you, K.J. K.J., you are at the helm of this center, again, the Leadership Conference Center for Civil Rights and Technology. Can you talk to us about the center’s mission and how it aims to shape a fair and just society with respect to current and emerging technologies? Talk to us about the civil rights issues the center will focus on as they relate to the impact created by emerging technology.

K.J.: Sure, yeah, no, happy to. And again, thank you for giving us folks at the Center for Civil Rights and Tech an opportunity to talk about the work and the creation of the recently launched center. I think, look, the important foundational point to make before jumping deeper into the conversation is that, you know, our nation and our world are experiencing a significant paradigm shift, right? New technologies, including AI and generative AI, are developing exponentially, with significant implications for how we do everything. You know, how we work, how we learn, how we develop and share information, and even how we make decisions. And so many critical civil rights issues have been starting to interact, or have been interacting for quite some time, directly with technological systems. In the most common parlance, when we’re, you know, just talking among friends or family, we think about AI in the context of OpenAI. It’s sort of the big generative AI system or tool that comes into play. People think about Facebook and Google as sort of big tech. But there have been automated systems and AI-powered tools for quite some time that have had direct interactions with the civil rights of individuals.

For example, facial recognition technology is a major one. Access to public benefits, right? Many states have been using these sorts of automated systems to determine, you know, who is able to access public benefits. You know, AI has been used in certain criminal justice settings to determine what sentencing guidelines should be. And so, again, what Maya Wiley, our President and CEO, saw was a need to create a central hub that would be able to interact with all of these civil rights issues. Pulling it out of sort of the programmatic part of the Leadership Conference and giving it its own sort of entity allows us to be in a position to work internally with other teams that are working on health care, housing, and criminal justice issues, but also with our over 240 coalition partners. To inform them, to educate them, and to use those sort of coalition entry points to discuss what it is we need to engage on.

Kanya: Thanks, K.J. So, Dave, let me turn to you, my other Center for Civil Rights and Technology colleague, who, as I mentioned at the start of this conversation, joined us just after the 2022 midterm elections to talk about AI and mis- and disinformation in the context of voting. So today, as we head into the 2024 presidential election, AI remains top of mind. And it certainly, again, remains the focal point of the work that you’re doing at the center. Can you talk to us about the center’s work, your work, to address the concerns that AI may further perpetuate election interference and trample voting rights?

Dave: Kanya, thanks again for inviting me on the podcast. And I think it’s important to table-set some of the key issues we’re seeing around online election disinformation. So online election disinformation is a threat to our democracy. It continues to confuse, it suppresses the right to vote, it spreads hate speech, and it’s otherwise disrupting our economy. The disinformation is anchored by the big lie. The big lie is the notion that Biden stole the election from Donald Trump. This has obviously been discredited. Joe Biden won the election fair and square. All the states certified. There was no massive fraud or steal going on. But the false information of the big lie is the anchor for the rest of the disinformation that we’re seeing around voting. And it’s still spreading rapidly, from high-profile users to disinformation spreaders on the internet, and it sometimes has gotten out into the mainstream media. And this leads to the other key narratives and disinformation themes we see around voting.

In the last couple of elections now, we’ve kinda come to know the playbook of disinformation spreaders. They spread false information about voting procedures and policies. That vote by mail and ballot drop boxes are not secure. They have preemptive claims of fraud; they claim fraud is happening before the election has even happened. A new one we saw in 2022 was increased online harassment of election officials. So this is really the crux of the problem we see with disinformation. And the problem is that it gets out into the mainstream. The proliferation of election disinformation is directly leading some states to propose and even enact laws that narrow the scope of voting. And they often use the false information that’s spread online about voting as the basis for it. So the Leadership Conference and the center, we’ve created infrastructure and have implemented strategies and tactical measures to address voting disinformation and hate speech. This has allowed us to advocate and respond quickly on policy issues and disinformation as it spreads. We have repeatedly raised our concerns with the social media platforms and tech companies about enforcement of voter disinformation rules. Most of the platforms have rules saying you can’t spread false information about elections. They don’t enforce them with any regularity, and they particularly don’t enforce them against high-profile users.

We also regularly advocate to Congress on election disinformation. Our CEO, Maya Wiley, has twice in the last several months testified before Congress on the problems surrounding election disinformation. We regularly interact with congressional members and key staff who are working on this. We also interact with the White House and key agencies that are working on this as well, to try to raise concerns there. The center also leads coalition leadership. The Leadership Conference, as most people know, is a large coalition of over 250 member organizations. I lead, along with a couple of other organizations, a working group of tech experts, voting rights experts, and civil rights experts, who meet regularly to come up with strategies and advocacy that we utilize on election disinformation. The collective effort has been really useful in helping us advocate on behalf of the coalition. Now with the center, we have, you know, increased brainpower and increased resources to expand our work. I think we’ve laid a good groundwork at the Leadership Conference, and I think the center will help us expand that further.

Kanya: Thank you, Dave. Claudia, both Dave and K.J. talked about the Leadership Conference not doing its work in a vacuum. We are a coalition of 240-plus member organizations, including yours, Unidos, the nation’s largest Hispanic civil rights and advocacy organization. So this means our center is your center. Talk to us about the work you are doing to ensure that civil rights and technology are married.

Claudia: What I might do is start by describing the way that UnidosUS as an organization is structured, because I think that’s what enables us to keep civil rights and civil liberties protections at the core, and really in the spirit, of everything that we do. You know, our main mission is to advance health, wealth, and power, and to advance inclusion, equity, and opportunity for Latino communities. And we do that along three pathways. One is through federal and targeted state policy and advocacy, which is what I do and where I am housed. And within that policy and advocacy space, we touch on a number of issues that impact Latino communities and immigrant communities. That includes health care, health care access, economic opportunity, education, immigration, and civil rights and civil liberties, which also includes voting rights. Next, we also do a lot of programmatic implementation. So making sure that we can provide services and resources at the community level, through running a variety of programs that also map onto each of the issue areas that I just named.

And then last but not least, I think, you know, the real heartbeat of UnidosUS is our own network of nearly 300 community-based organizations: Latino-serving, Latino-led organizations that provide direct services for communities at the very grassroots level. And so it’s the combination of the three. And it really gives us a clear vision as to the ways in which AI and tech writ large are going to start impacting the rights and liberties of Latino and immigrant communities. When we started developing our AI policy platform, we similarly included three main pillars. We shorthand them as values, voices, and investment. But I think it actually mimics our own UnidosUS structure, in the sense that our goal is, through policy and advocacy, to embed constitutional values and democratic norms into AI systems, standards, and policies. So making sure that any kind of tech- or AI-driven tools really produce and advance fairness, equity, inclusion, and opportunity for all, and equally. Next is voices. So it’s making sure that, you know, those impacted communities that have historically been left out of governance processes, that have historically been left behind by tech, have a robust and meaningful seat at the table to actually, you know, provide input and feedback into how these tools are designed, how these tools are regulated, and ultimately, how they’re also deployed.

And the last pillar is investment. It complements the first two in the sense that we recognize that impacted and historically marginalized communities have also been historically under-resourced and underfunded, which results in a number of gaps that we see writ large across issue areas, but certainly even within the tech space. So if you think about the digital divide, the digital skills gap that exists within Latino communities, you know, these are largely a product of under-resourcing and underinvestment. And so we also call for concerted, targeted investment to skill up and build the capacity of these communities, so that they can have a meaningful and robust voice once they’re brought into the governance process.

Kanya: Claudia, thank you for that. Unidos is doing some great work, again, in partnership with our coalition and the center. You talked about the real need for a meaningful seat at the table. And Dave, you were talking about all of the stakeholders who need to be at this table. And so, obviously, the coalition, the advocates, the tech sector, members of Congress, as you mentioned. K.J., let me go back to you. Obviously, the center is going to be critical to leading these conversations and getting everyone around this table. So talk to us about the center’s role in bringing diverse stakeholders together to make policy recommendations to address AI and some of the other pressing tech issues of the day.

K.J.: Stepping back quickly, you know, when we have new interns, policy interns, or policy counsel join this organization, and in other roles I’ve had, I always talk about how the tech policy ecosystem is really driven by three major stakeholder groups, right? The first is industry, the second is government, and the third is civil society. You know, industry and civil society are pushing on government to do what they want it to do, right? Coming from an industry association, I can tell you, they’ll continually say we just want to know what the rules of the road are. But it’s more than that, right? They want less burden on the actual ability to be compliant or accountable. And civil society, rightfully so, right, our constituents are communities of color, underserved communities, and other groups, like Claudia mentioned as representing Unidos, that specific constituency, and my previous role working at Asian Americans Advancing Justice, AAJC, focusing on that constituency. And so the Leadership Conference, I think the reason, you know, I’ve been so excited to be at the helm of starting up the center, is that we are a coalition-driven organization. And the center will keep that spirit in the work that we do.

And we do that in multifaceted ways. The first is, you know, we have these coalition entry points. We have a task force, a media and telecom task force. We have the civil rights, privacy, and technology table. You know, there are a number of other sort of AI working groups. We work to bring in members of our 240-plus coalition to be involved in that space. But on the other end, Claudia said something about pillars, and it reminded me of something I probably should have mentioned at the top, which is that, look, the role of the center is really threefold. The first is advocacy, which is continuing to do what we do with government, you know, working with the fine folks in government affairs, such as yourself and your team. Also working on the education piece, educating our coalition members. In order for us to be a successful strategic hub for the coalition, we also have to bring the coalition along. And one of the challenges in doing that, when we’re trying to engage our diverse set of stakeholders, is that a lot of organizations don’t have the capacity, right? They don’t have staff to dig into some of these issues.

And so with the capacity we’ve now built within the center, what’s important for us is our ability to educate coalition members and bring them along. And the third pillar, which is something that, you know, we haven’t done a ton of in the past, but which I hope to really build up in the future, is research, right? Focusing on being sort of a thought leader on these issues of civil rights and technology policy. And we can do that in many ways, right? We can focus on one specific constituency and say, look, what is the impact of facial recognition technology on certain communities of color? Empower our advocacy work with government by understanding and studying the impacts it has on these communities. So, you know, on a forward-looking lens, that’s where we’re at: broader education with the coalition, more sort of research to amplify our advocacy. But also building on that foundation that the Leadership Conference was founded on, which is working with those groups that want to be at the table and making sure that we’re bringing everyone along.

Kanya: Thank you, K.J., for that. And I know the center is multi-issue and multi-focused, but I do want to get back to AI. I really think, today, you cannot talk about the latest tech happenings without opening the paper, scrolling the paper, and seeing multiple headlines referencing AI. And obviously, Dave, as you were touching on, we’re thinking about AI in the context of the election and voting. I mean, this is the moment that we’re in. And so, with AI-generated content causing so much confusion in the realm of elections and politics, talk to us about how AI-generated disinformation spreads, talk to us about the threat that AI presents with respect to election disinformation, and talk to us about what the center’s charge really is in this specific context.

Dave: The concern around AI and election disinformation is that the rapid growth of AI, specifically since the last midterm election in 2022, has the potential to turbocharge the volume and speed of voting disinformation. Some of the narratives I mentioned earlier, we think in 2024 that those narratives are going to pop up again. So it’s not so much that we don’t know what’s going to be said. I think we have a good handle now on the types of narratives that disinformation spreaders use to send out false information. So these false narratives, again, are sure to come up again this year. But the problem with AI, the potential, is that AI can increase the ability and scale to spread false information and propaganda, which will further leave voters confused and questioning what they see or hear. I mean, voters are already having trouble trusting what they’re seeing on social media and what they’re seeing in the news. If you throw AI into the picture, potentially, it’s going to make it way worse. Our coalition partner, the Brennan Center, which has worked extensively on voting rights issues, you know, they’ve pointed out that elections are particularly vulnerable to AI disinformation, because of the way AI tools work: you know, they’re most effective when they produce content that’s similar to what’s in current databases. You know, they’re mining from a lot of information that’s out there already.

You know, as I mentioned earlier, since the same election narratives are bound to come up, the underlying election disinformation in the data that AI tools are using can make them a potential time bomb for future election disinformation. And the problem is, if we have more of this stuff coming out as AI-generated, it’s going to accelerate the loss of integrity and security of our election system. And it can dramatically interfere with the right to vote. One question I get asked a lot is, how can AI spread this information? And part of the issue is, at this point, it’s somewhat hypothetical. We’ve seen some specific instances of AI-generated content being used to spread false information, but it hasn’t happened on a mass scale yet. Now, it may not have happened on a mass scale, but it may have happened on an increased scale. So we don’t really know exactly how bad it’s going to be. But I think we’re anticipating the worst and hoping for the best. I think disinformation spreaders can plant false information through the use of AI. And that’s through, you know, deepfakes. These are deepfake audio and visuals. These are, you know, you see someone on the screen, it looks like that person, and it sounds like that person, but it’s really not. It’s AI-generated. It looks, you know, really real.

You know, ChatGPT and the chatbots that some people are using to get information, they could spread false narratives because they can pull from false information that’s already out there. AI can send false or deceptive comments from fake constituents or advocates, or set up fake news sites. There are lots of ways this can be spread. And we’ve seen a few examples of this. Just this week, you know, right before the New Hampshire primary, there was a false robocall that went out with President Biden’s voice. It sounded like him, and people were getting these robocalls. And Biden’s AI voice was saying, “Don’t vote on Tuesday in the primary.” Obviously, it was someone who is against Biden; we don’t know exactly who’s behind this, but it’s a tactic to suppress voting. We’ve seen other deepfakes, you know, isolated deepfakes. There was one of Elizabeth Warren last year, Senator Warren from Massachusetts, where her AI-generated image was allegedly saying that Republicans shouldn’t be allowed to vote in 2024. That’s obviously not true. There was even a deepfake of, you know, Donald Trump at one of his recent court appearances, a deepfake that went out of him being dragged away by police at the proceedings. That obviously didn’t happen.

You know, there are all kinds of ways this can happen. And I think the challenge for advocates is, you know, with AI disinformation, we’re almost trying to figure out what it is and solve it at the same time. That’s what Congress is struggling with. Even the entities like OpenAI and Microsoft who are putting out AI, they’re still trying to figure it out, too. So it’s all happening so quickly, and we just have to be on guard for how it’s going to move, and what it’s going to look like, and how we can counteract it.

Kanya: Thanks, Dave. All sorts of troubling. And I certainly want to think about this, Claudia, in the context of communities of color. And I know Unidos is thinking about this too. You’ve raised concerns about how we’re building and training AI tools, that we’re using inequitable data. And we’re seeing, right, disinformation, as Dave describes it, we’re seeing voter suppression attempts reaching our communities of color. So talk to us about the challenges posed by biased data, and the potential consequences for communities of color and immigrant communities.

Claudia: I might start with an example related to voting, because it tacks on to what Dave said. And that’s, you know, one prime concern and issue for Latino and immigrant communities: language access. When it comes to some of the large language models like ChatGPT, Claude, and Bard, you know, they’re openly trained primarily on English data, first and foremost. Not only that, most of the internet is in English. So inherently, right, there’s already a lack of parity between English and Spanish, and between English and other non-English languages as well. So inherently, you’re already going to have gaps when it comes to equity and access, and a lot less ability to actually check the accuracy, right, of an output from a large language model. So that’s one example of the ways in which there can be disparate outputs, right, created from sort of an AI-driven large language model. Something that’s a little bit more foundational, and kind of a very ground-level example, is a study that I like to reference that Unidos did back in, I think, 2021. What we did was launch a project that looked at, over about a 10-year span, how many people were killed by police while in custody or while being taken into custody.

Even though there’s a federal law, the Death in Custody Reporting Act, that’s been on the books for about 20 years, the Department of Justice has never actually put together a full-fledged count of just how many people are killed by police each year. And so what we did was look at some of the accounts that journalists were putting together, like the “Guardian,” and that academics and researchers were putting together. And what we found was that despite, you know, best efforts, good-faith efforts to really capture just who was killed by police, it was really incomplete. Not only was it incomplete because there’s no actual database to measure up against, but it was also incomplete because of the way in which they categorized individuals killed by police: as Black, white, or other. And so what we did is we ran a research project that used surname data to try to match up all the surnames that we could get of people killed by police, match them against an ethnicity or race, and get a more accurate and complete picture, right, of who are the kinds of people being killed.

And what we found was that the average estimates for the number of Latinos killed by police, again, very good-faith efforts, the accountings and the numbers that we had, were an undercount of nearly a quarter. And we found that AAPI individuals were killed at nearly six times the rate that is commonly cited. And the rates of deaths for Native Americans went up 50%. So you can see that, right, the ways in which we collect and process data, you know, at the government level, can produce misconceptions. I mean, that’s its own form of misinformation, right? Because then we’re not actually telling the full story of what is happening when people are killed by police. But now take it one step further. If this is the training data that’s being used to train large algorithms, and ultimately, that’s fed into an AI machine for something like, let’s say, predictive policing or recidivism algorithms, we’re going to come out with very inherently skewed outcomes. And this can result in massive harms for communities of color, and for immigrant communities as well.

So those are two different kinds of examples of the ways in which the issue of training data bias plays out, right? Some of them can be very obvious, very ground-level, and some of them are not necessarily something that comes to mind right away. For that last part that I just said, you know, another example that I’ll close out with is a study that was run out of Duke University Hospital, where they were looking at trying to build an algorithm that could identify sepsis early on in kids. Because, you know, it can be very fatal. I think something like 10% of children die when they get sepsis. So Duke researchers tried to build an algorithm that could predict and set markers for when a child needed to get treatment, and prevent any kind of fatalities. And what happened was, despite researchers trying to mitigate biases, they found that they couldn’t account for all of them, because of the ways in which they had trained the algorithm. And so the outputs the sepsis algorithm was producing suggested that Latino kids were healthier and actually got sepsis at lower rates than white children. And that actually wasn’t the case. What had actually been the problem is that Latinos often have delayed access to medical care, and therefore they have to wait longer to see a doctor. And that was why they appeared to be healthier: because they weren’t showing up at the hospital or the doctor’s office in time.

And so these are examples of the ways in which, a lot of the time, not knowing what we don’t know can produce disparate outcomes and outputs. Not actually having equitable data collection processes can create bias; and not paying attention to the unique democratic principles and constitutional rights that we all have, not honoring those within the building of these large language models, for example, all of these taken together can produce different kinds of harms and inaccurate outcomes.

Kanya: Thank you, Claudia, for those excellent examples that really hone in on and drive home the point that if we have all this data that is incomplete, surely we are not going to get the benefits of the AI tools that we’re seeking, right, for the betterment of our society. I know our justice team will be pleased that you lifted up the Death in Custody Reporting Act. It is a very sore subject. That’s actually a statute I’ve worked on as long as it’s been around, and it’s sad that we’re not yet in a place where we have proper resolution. I want to keep you in this conversation, Claudia, as we turn to solutions. You talked about data in the justice context and in the health context. And your organization has also acknowledged that, though there are shortcomings, there is real potential for AI to help us communicate across language barriers, to help us improve health, to help us improve education. So, Claudia, how do we get there? How do we ensure a participatory and inclusive process, again, particularly for communities of color, who are so often not at the table, as we’re developing AI and other tech solutions, and ensuring that those solutions align with civil rights values?

Claudia: It’s a shame that we’ve been able to be so innovative in the technology space, you know, and have all these wonderful technological innovations, but that we haven’t been able to, at the same time, find innovative ways to restructure our democratic processes, and ways to tap into grassroots-level communities and marginalized communities in a way that is advancing equity, truly, in practice and not just in name. And so that’s one thing that we call for within our AI policy platform: a reimagining and a rethinking of the ways in which we can bring people to the table, give people a seat at the table. Again, I think the spirit of that idea is definitely encapsulated within our two pillars around voices and investment. Making sure that we are actually consulting with the communities that are most impacted by these technologies, and who have also historically been left out. But it’s important to do that for a number of reasons. And I think first and foremost is the idea that we cannot have a purely technical approach to regulating AI. At the end of the day, tech and AI should be measured against a sociotechnical standard, meaning that they should be designed to be rights-protecting… They should be designed to be human-augmenting, not necessarily human-replacing.

And so in order to actually do that, you know, we need to be meaningfully consulting with the very people who are being impacted by these tools. And at the same time, you know, I think it’s also important to identify the ways in which we can repurpose and use AI tools to actually advance these processes as well. So, as you mentioned at the beginning, you know, we do see a lot of potential for translation services. That’s one thing that we are particularly excited about: the ways in which, you know, real-time translations can happen for immigrants or non-English speakers who are seeking benefits access, who are seeking health care access, all these kinds of things where we really, truly see a benefit. We can see the ways in which ed tech and AI applications in the education space can really help non-English language learners keep pace. But the reality is that, you know, the policies around how each of these tools is used in a very specific context need to be made in deep consultation, consistently, often, and meaningfully, like I said, with the people who are impacted the most.

So, you know, I know K.J. and Dave were talking at the top of the conversation about giving everybody a seat at the table, having a diverse set of working groups, right, to identify and predict the harms that we can foresee and prevent them. And I think it’s important to think about it not only in terms of impacted groups, but also in terms of who the different players are. To use education as an example, right? It’s bringing in educators themselves, right? It’s bringing in students, it’s bringing in parents, it’s bringing in school administrators. It has to be the entire ecosystem that gets brought in, because, you know, otherwise, we can’t always know what we do not know. The only way that we can do that is by actually having true inclusivity, and doing that in governance processes that promote inclusivity and equity, and elevate voices that need to be heard and that have historically been left out.

Kanya: Thank you, Claudia. And, Dave, let’s go back to you now. Again, as we think about this ecosystem, as we think about solutions, let’s talk about voters. Again, you’ve talked about how AI and disinfo are affecting our voters. How are we going to ensure that civil rights and safeguards are in place to protect our democracy? And you talked a bit about platforms. How is the center going to help bring accountability to platforms?

Dave: Yeah. So I think this is a key issue, Kanya, because when there are efforts to suppress the vote, whether through disinformation or AI, or the more traditional tactics that have been used for decades to suppress votes, this isn’t really a political issue. It’s not a partisan issue. It’s a democracy and a civil rights issue. You know, undercutting voting rules and voting rights is really undercutting democracy. I mean, safe and secure voting is one of the pillars that’s made the United States functional and work as a democracy. When there are efforts to discredit that, or to suppress that, or to go after communities of color in particular and try to divide and marginalize voters, it’s undercutting democracy and the rights of people to exercise their power to vote. When you have tactics like online disinformation and potentially AI disinformation being spread, you know, it leads to situations like what happened on January 6th, three years ago, with the insurrection. I mean, you can draw a straight line from the time the election was called for Biden to the insurrection two months later, with the amount of disinformation that spread about the vote count, and the challenges to the voting that were led by Donald Trump and other allies of his to try to throw shade on or try to overturn the election.

A lot of that started with online disinformation that was being spread about how the vote was being counted, and false information about what the results were, and things of that nature. We actually saw more disinformation in 2020 after the election ended than before it. So that all lit the fire for what happened on January 6th. And that’s, you know, a direct threat to our democracy. So one of the things we’ve worked on a lot at the center and at the Leadership Conference is platform accountability. We’ve repeatedly raised our concerns with platforms about the way they deal with voting disinformation. As I said earlier, most of the platforms have some degree of rules, to varying levels, that basically say you can’t post false information about voting, and you can’t post false information about the results of elections, and things of that nature. But they don’t enforce them consistently. And as I mentioned, against high-profile users, they don’t really do it at all. Since Elon Musk took over Twitter, it’s gotten worse. Elon Musk took over Twitter, changed the name, called it X, and basically eliminated the whole trust and safety team there. And he basically said he’s not going to enforce rules against disinformation about elections.

What we’ve seen since then is that Meta, YouTube, and others have started to scale back their enforcement too. I think Elon Musk saying that kind of gave the other platforms an out to do it. They say they want to work on it, but their performance doesn’t indicate they really want to deal with this. And I think that’s what we try to consistently talk to the platforms about. That, you know, if they don’t enforce their own rules on this, rules that they’ve set up themselves, they’re undercutting how our voting system can work. We’ve urged them to expand the scope of their disinformation rules, to enforce their own rules consistently, and particularly to enforce them against high-profile users. We’ve provided content that we think violates their policies. So we’ve tried to work with them to get a dialogue going and to find solutions. It’s going to be challenging this year, because, as I said, they’ve pulled back on their enforcement. We are in the process of sending some refreshed demands, together with the working group I mentioned earlier that we lead, the Elections, Hate, and Disinformation working group, of which Claudia and Unidos are active members. Claudia is super helpful with me on these issues.

We’re working on refreshed demands that we can give the platforms to let them know what we think needs to be done ahead of the coming election. And there’s a focus on AI disinformation in those demands. So we’ll be releasing them in the coming weeks. You know, we have to continue to push the platforms to make changes. And we have to continue to, you know, work the rest of the government, work Congress, and the administration, and the key agencies that are concerned about this as well. So it’s not something to give up on. It’s uphill, but we did make some progress before the 2022 elections. So we’re hopeful we can make some more progress this year as well.

Kanya: Absolutely. I know it will be critical during this high-stakes election season. Thank you for that work, Dave. Thank you, Claudia. And K.J., no pressure at all. You’re at the helm of this Center for Civil Rights and Technology that really aims to be a beacon of hope for a future where technology empowers, not discriminates. How does the center plan to actively engage the public? Is there an educational, legislative, or advocacy charge for our listeners? It sounds like there’s a lot we all need to rally around, and rally around together.

K.J.: No, I appreciate that. And I appreciate that we’re all trying to end on an optimistic note, given all the sort of issues we’ve seen with the development of AI tools and systems. So, look, let me start by just saying this. Many of us, not all of us, but many of us are in this line of work because we’re tech optimists. That term has been a little twisted by industry folks. But I do think even in civil society, you have folks who truly believe that tech can be a tool to better connect us. A lot of us during the pandemic used tech tools to connect with family in foreign countries, to work on projects collectively, even our ability right now, you know, to be in five different locations. And for a lot of us, especially people of color, it’s a way to be connected to our culture, right? Like, you know, my father passed away, and I can still have access to the music he listened to when he was younger because of technology.

The critical part, though, is that it’s not innovation if it’s not building in equity. It’s not innovation if it’s leaving any of us behind. You know, the center does more work than just AI, but of course, today we’re talking primarily about artificial intelligence. You know, this is an area, your listeners should know, that does not have strong governance tools and does not have strong regulation. And the challenge with bringing civil rights claims, as all of us on this podcast right now know, is that it requires showing that there was actual harm. And the issue with these tools is that sometimes you don’t know. And so I think, you know, the center’s sort of laser-focused goal this year, and it will continue to be, is AI regulation and governance. The biggest piece we’ve seen coming out in this space has been the White House AI Executive Order and the subsequent OMB guidance. Our partner organizations, our coalition members, and, you know, the center staff worked really hard to make sure that we gave feedback on this executive order to ensure that all the loopholes are closed. And we’ll be working this year to hold the agencies accountable, right, as they start executing on that executive order, and implementing it, and making sure that those agencies actually do what they say they’ll do.

And the second major goal for us this year really is to focus on voting rights. And that’s a lot of Dave’s work. In terms of what we’re sharing with the public, it’s also important to remind your listeners that we are three months old. You know, as this is being taped, it’s the end of January, and I started in November. But the goal is to ramp up our ability to produce outward-facing documentation and information. So taking that executive order as it impacts justice issues, health issues, and housing issues, and creating information sheets that can be easily disseminated is something we’re working on. Dave talked a little bit about the misinfo and disinformation work that he does. We analyze that misinformation. We work with Spitfire and Common Cause to develop inoculation messaging. And that is disseminated to grassroots organizations, right, that are doing this empowerment work, to ensure that they’re aware of how to counter the misinfo and disinfo. But in terms of a public charge, really, I think two things. One is, these topics may seem daunting, but it’s important for everyone to really understand them and stay engaged on them. And the second point I’d quickly make is just, you know, don’t be distracted. Especially in tech policy, it is very easy to have your eye come off the target.

So if we’re looking at hate speech, for example, a lot of the focus is on X, right, formerly Twitter. But as Dave mentioned, the big lie: it’s Google and Meta that have changed their policies about the big lie, saying that they will no longer take that information down. When it comes to data collection and surveillance, of course we’re looking at law enforcement, of course we’re talking about the big tech companies. But the biggest sort of enforcement action that came out from the Federal Trade Commission in the last few months was focused on Rite Aid, and the ability for consumers to go in, have their sort of biometric information collected, and then have that used against them in the future. The FTC came out with a very aggressive consent order saying, “Rite Aid, you can no longer do this for five years. And if you do want to do this, here are the criteria you have to employ.” Again, the point being, don’t be distracted, right? Big tech may be the one in the news, but data is being collected everywhere. And there are small companies that none of us have heard of that are also developing tools using publicly available data, finding ways to create shortcuts, which yield impacts on our communities.

And so it’s important for us to keep our eye on the target, and it’s important for us to be engaged. The best way, you know, I’d ask folks to jump on, of course, the civilrights.org website and check out the center. You can see some of our past work, some of our positions, and so on, and engage us that way. But beyond that, look, I think these battles aren’t new. And I think it’s just important to remind folks about that. Surveillance is not a new concept; marginalization isn’t a new concept; neither is exploitation. These aren’t new impacts that our communities of color and underrepresented communities face. It’s just a new battlefront that we have to ensure we’re all empowered to take on, and to learn the vocabulary, and to learn sort of where the best venues are to have these conversations. And the center hopes to do that in the future.

Kanya: Thank you, K.J. We look forward to those conversations. And I think that is the perfect way to end this one. Thank you so much, K.J., for joining us today.

K.J.: Thank you.

Kanya: Claudia, we were so happy to have you here.

Claudia: Thank you so much. This was great.

Kanya: And Dave, always a pleasure. Glad you were able to come on again.

Dave: Yeah. Thanks for inviting me again, Kanya. I really appreciate it.

Kanya: Thank you for joining us today on “Pod for the Cause,” the official podcast of the Leadership Conference on Civil and Human Rights and the Leadership Conference Education Fund. For more information, please visit civilrights.org. And to connect with us, hit us up on Instagram and Twitter, @civilrightsorg. You can text us: text “CIVIL RIGHTS,” that’s two words, “CIVIL RIGHTS,” to 52199 to keep up with our latest updates. Be sure to subscribe to our show on your favorite podcast app and leave a five-star review. Thanks to our production team: Shalonda Hunter, Dena Craig, Taelor Nicholas, Oprah Cunningham, and Eunic Epstein Ortiz. And that’s it from me, your host, Kanya Bennett. Until next time, let’s keep fighting for an America as good as its ideals.

By opting in to text messages, you agree to receive messages from the Leadership Conference on Civil and Human Rights. 4 msgs/month. Msg and data rates may apply. Reply HELP for help, STOP to cancel.