Civil Rights Comments to the FCC on AI Ad Disclosure Rulemaking


September 4, 2024

The Honorable Jessica Rosenworcel
Chair
Federal Communications Commission
45 L Street, NE
Washington, DC 20554

Re: Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements, MB Docket 24-211

Dear Chair Rosenworcel:

On behalf of The Leadership Conference on Civil and Human Rights (The Leadership Conference), Common Cause, United Church of Christ Media Justice Ministry, and the undersigned organizations, we write in response to the Federal Communications Commission’s (FCC) notice of proposed rulemaking regarding the disclosure and transparency of artificial intelligence-generated content in political advertisements.[1] As discussed in these comments, the FCC should take action and use its existing authority to provide clarity on this issue.

Harms from AI-Generated Political Ads Call For FCC Action
Academics, researchers, and advocates have been sounding the alarm about the use of deepfakes in our elections since before the dramatic rise in policymaker interest in artificial intelligence (AI) this year. In 2019, both the Senate and House of Representatives held hearings on the challenges of deepfake technology, in which experts educated members of Congress about the potential risks to democracy and national security stemming from malicious use of the technology.[2] The general public is also beginning to understand the risk posed by the use of AI in our elections, with surveys showing that between 70 percent and 85 percent of people are concerned about the role AI deepfakes and other AI-generated content could play in the spread of misinformation.[3] At the same time, there are mounting concerns about viewers’ ability to recognize deepfakes. For example, a study by the RAND Corporation found that 27 percent to 50 percent of respondents were unable to identify deepfakes related to climate change; disturbingly, adults and educators were the most vulnerable to being fooled.[4] The current lack of federal regulation creates a high degree of uncertainty going into the 2024 presidential election.

Members of Congress from both parties support legislative solutions to the problems deepfakes create in our elections,[5] and more than 17 states have passed some type of law regulating deepfakes.[6] While not all of the proposed and enacted federal and state laws related to deepfakes deal specifically with political content, they highlight a growing consensus on the need to address the myriad problems arising from the use of AI in media creation. Attempts to push the Federal Election Commission to act, however, have stalled.[7] The FCC’s proposed rule would complement existing state laws and potential congressional action by providing voters with transparency into the use of AI in political ads.

Specific Harms to Vulnerable Communities
Black, Latino, Asian American Pacific Islander (AAPI), and other communities of color have historically been targets of voter suppression and disinformation campaigns.[8] These campaigns have been orchestrated both by outside groups[9] and by official campaigns themselves. These patterns show no signs of changing with the ever-increasing use of generative AI. Over the past six years, online trolls have frequently impersonated Black users in an attempt to sow distrust and suppress turnout.[10] Other bad actors have taken advantage of social media platforms’ failure to devote sufficient resources to disinformation that targets the Latino and AAPI communities and is designed to create doubt in our political institutions.[11] People with disabilities likewise bear the consequences of misinformation and of the voter suppression campaigns it often drives.[12] Generative AI now gives bad actors and political campaigns the ability to carry out these practices with increasing precision and realism. The need for the FCC to require disclosure of AI-generated content in political ads is therefore more pressing than ever.

Disclosure is in the Public Interest and Contributes to a More Informed Electorate
Public polling has shown that most Americans are concerned about the use of AI deepfakes to spread disinformation, with the highest concern about their use in the political context.[13] AI deepfakes and other AI-generated content have the potential to dramatically boost election disinformation and threaten the integrity of our elections, to the detriment of every person and political party. People in the United States are increasingly inundated with lies and manipulative content from both domestic and foreign actors and, as a result, are losing faith in our democracy.[14] The precipitous growth in election disinformation since 2016 has led to a decline in public trust in our elections and their results.[15] People trust fewer and fewer news sources. The proliferation of manipulated image, voice, or audio content from campaigns will further erode Americans’ trust in our media and our institutions. In this environment, AI deepfakes can only supercharge disinformation and deepen distrust.

Additionally, non-English speakers are particularly vulnerable to fraudulent campaign communications.[16] Accurate and reliable information is often not available in languages other than English.[17] Accordingly, the FCC should consider requiring disclosures in the primary language of the broadcast, if other than English, to ensure equity. The FCC should also consider making disclosures accessible to people with disabilities.

In this increasingly complex information ecosystem, it is critical for people to know that the images, videos, and other media they view in a campaign ad from a candidate for public office are authentic. It is not reasonable to expect individuals to discern when AI has been used to generate an image. In the world’s oldest continuous democracy, an individual running for public office should be held to a higher standard of authenticity and integrity than the disinformation-spewing troll accounts on social media.

The Commission has Authority to Adopt the Proposed Rules
As the Commission explained, the Communications Act places great importance on the role of political advertising on broadcast outlets. Under Section 315, the so-called “equal time” rule, if a licensee permits one legally qualified candidate to place ads, it must permit all other candidates for the same office an “equal opportunity” to do so.[18] This rule applies to all broadcasters, which includes traditional broadcast television and radio, satellite radio (Sirius/XM),[19] satellite television that originates programming (DISH and DirecTV),[20] and cable television.[21] In addition, all licensees except cable television and non-commercial broadcasters (public TV and radio) are affirmatively required to offer federal candidates time for advertising pursuant to Section 312(a)(7) of the Communications Act.[22] Record-keeping is likewise a longstanding part of the political programming regime and is critically important for transparency and journalism.[23]

The Proposal Complies with the First Amendment
As the Commission explained,[24] the proposed regulations could be subject to several First Amendment tests, depending on the level of scrutiny applied. Under the intermediate standard, restrictions are upheld when they advance “important governmental interests unrelated to the suppression of free speech” and do not “burden substantially more speech than necessary to further those interests.”[25] If strict scrutiny applies, the disclosure requirements will be upheld if the government’s interest is “compelling” and the rules are both “narrowly tailored” to further that interest and the “least restrictive means” of accomplishing the desired objective.[26]

The Commission is correct that the proposed rules meet those burdens. The Commission’s proposal combines two existing regimes: it takes the framework of the payola rules, a disclosure requirement that often takes the form of an on-air announcement,[27] and combines it with the obligation to request information from entities placing political advertisements and to catalog that information in the public file.[28] The FCC’s sponsorship disclosure obligations have been upheld over the years and rarely challenged, as have the political programming rules.

The record-keeping burden is not heavy. It is similar to the burdens upheld in McConnell v. FEC, 540 U.S. 93 (2003), where the Supreme Court upheld requirements that broadcast licensees document requests for political advertising time made by a candidate, that refer to a candidate, or that relate to “a national legislative issue of public importance.”[29] The Court found that the burdens were not great and that the record-keeping obligations helped the FCC enforce its rules and helped the public monitor broadcaster behavior.[30]

The governmental interests here are strong. Disclosing the source of information has been shown to assist people in evaluating political advertising messages.[31] Sponsorship identification messages shape the “considerations that people take into account when making judgments about political candidates or issues.”[32] Public access to information about the political messages people receive is of the highest importance to the goals of the First Amendment and the Communications Act: permitting a vibrant discourse in the marketplace of ideas.[33]

The Public Database Must be Functional and Easy to Use
Public disclosure of these advertisements is critical, ideally through an easy-to-use database that both journalists and members of the public can understand. In the years before the FCC finally put broadcaster public files online, organizations resorted to crowd-sourced, in-person visits to dusty file folders in broadcast studios.[34] Similarly, incomplete uploads to databases can make the public-facing data of limited utility.[35] The Commission should take this opportunity to upgrade its public file databases to make them easier for the public to use; currently, the interfaces at https://publicfiles.fcc.gov/ are not user-friendly. For example, a person seeking information about a local TV station in Washington, DC must know the station’s call letters and then drill down into its public file to find political programming materials.[36] Previous efforts to create easy-to-use databases that give the general public and journalists access to the FCC’s existing political programming public files appear to have fallen by the wayside.[37] They should be revived.

Conclusion
Thank you for considering our views about AI deepfakes in our elections. We look forward to working with you on this and other issues of importance to our country. If you have any questions about this letter, please contact Cheryl A. Leanza, United Church of Christ Media Justice Ministry, at [email protected], Ishan Mehta, Common Cause, Media & Democracy Program Director, at [email protected], or Jonathan Walter, The Leadership Conference, policy counsel, at [email protected].

Sincerely,

The Leadership Conference on Civil and Human Rights
Common Cause
United Church of Christ Media Justice Ministry
Access Now
Asian Americans Advancing Justice – AAJC
Japanese American Citizens League
National Black Child Development Institute (NBCDI)
National Consumer Law Center, on behalf of its low-income clients
National Disability Rights Network (NDRN)
NETWORK Lobby for Catholic Social Justice
Public Citizen
Sikh American Legal Defense and Education Fund
The Trevor Project

 

[1] Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Media, Notice of Proposed Rulemaking, MB Docket No. 24-211 (rel. July 25, 2024) (NPRM).

[2] William A. Galston, “Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics,” Brookings (Jan. 8, 2020), https://www.brookings.edu/articles/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/.

[3] Chris Jackson, et al., “Americans Hold Mixed Opinions on AI and Fear its Potential to Disrupt Society, Drive Misinformation,” Ipsos (May 4, 2023), https://www.ipsos.com/en-us/americans-hold-mixed-opinions-ai-and-fear-its-potential-disrupt-society-drive-misinformation; Taylor Orth and Carl Bialik, “Majorities of Americans are Concerned About the Spread of AI Deepfakes and Propaganda,” YouGov (Sept. 12, 2023), https://today.yougov.com/technology/articles/46058-majorities-americans-are-concerned-about-spread-ai?redirect_from=%2Ftopics%2Ftechnology%2Farticles-reports%2F2023%2F09%2F12%2Fmajorities-americans-are-concerned-about-spread-ai.

[4] Christopher Joseph Doss, et al., Deepfakes and Scientific Knowledge Dissemination, RAND Corporation (Aug. 23, 2023), https://www.rand.org/pubs/external_publications/EP70217.html.

[5] Alexander Hecht, Bruce D. Sokler, Christian Tamotsu Fjeld, and Raj Gambhir, “Senators Advance Three Election-Related AI Bills Out of Committee,” Mintz (June 6, 2024), https://www.mintz.com/insights-center/viewpoints/54731/2024-06-06-senators-advance-three-election-related-ai-bills-out.

[6] Public Citizen, Tracker: State Legislation on Deepfakes and Elections (last accessed Aug. 22, 2024), https://www.citizen.org/article/tracker-legislation-on-deepfakes-in-elections/.

[7] Ashley Gold, “Scoop: FEC Won’t Act on AI in Election Ads This Year,” Axios (Aug. 8, 2024), https://www.axios.com/pro/tech-policy/2024/08/08/fec-ai-election-advertising-no-action.

[8] Christine Fernando, “Election Disinformation Targeted Voters of Color in 2020. Experts Expect 2024 to be Worse.,” Associated Press (July 29, 2023), https://apnews.com/article/elections-voting-misinformation-race-immigration-712a5c5a9b72c1668b8c9b1eb6e0038a.

[9] Charlene Richards, “Robocalls to Voters Before 2020 Election Result in $5 Million Fine,” NBC News (June 8, 2023), https://www.nbcnews.com/politics/elections/robocalls-voters-2020-election-result-5-million-fine-rcna8839.

[10] Whitney Tesi, “When Disinformation Becomes ‘Racialized,’” ABC News (Feb. 5, 2022), https://abcnews.go.com/Technology/disinformation-racialized/story.

[11] Terry Nguyen, “The Challenge of Combating Fake News in Asian American Communities,” Vox (Nov. 27, 2020), https://www.vox.com/identities/21579752/asian-american-misinformation-after-2020.

[12] Fabiola Cineas, “Why It’s Now Illegal for Some Voters With Disabilities to Cast a Ballot,” Vox (Apr. 28, 2022), https://www.vox.com/23043567/voters-with-disabilities-voting-barriers-restrictive-laws.

[13] Carl Bialik and Taylor Orth, “Majorities of Americans Are Concerned About The Spread of AI Deepfakes and Propaganda,” YouGov (Sept. 12, 2023), https://today.yougov.com/technology/articles/46058-majorities-americans-are-concerned-about-spread-ai.

[14] Joel Rose and Liz Baker, “6 in 10 Americans Say Democracy is in Crisis as the ‘Big Lie’ Takes Root,” NPR (Jan. 3, 2022), https://www.npr.org/2022/01/03/1069764164/american-democracy-poll-jan-6.

[15] “Under the Microscope,” Common Cause (Sept. 20, 2023), https://www.commoncause.org/resource/under-the-microscope/.

[16] AI and the Future of Our Elections: Hearing Before the Senate Committee on Rules & Administration, 118th Cong. (2023) (Statement of Maya Wiley), https://www.rules.senate.gov/hearings/ai-and-the-future-of-our-elections.

[17] Aliya Bhatia, “Election Disinformation in Different Languages is a Big Problem in the U.S.,” Center for Democracy and Technology (Oct. 18, 2022), https://cdt.org/insights/election-disinformation-in-different-languages-is-a-big-problem-in-the-u-s/.

[18] 47 U.S.C. § 315. Equal time does not apply to bona fide news coverage. 47 CFR § 73.1941.

[19] 47 CFR § 25.702(a)-(b).

[20] 47 U.S.C. § 335(a); 47 CFR § 25.701(b)-(d).

[21] 47 U.S.C. § 315(c); 47 CFR § 76.205.

[22] NPRM at para. 4.

[23] NPRM at para. 5. These rules have been addressed in numerous court decisions and, for the most part, have been upheld. See, e.g., CBS, Inc. v. FCC, 453 U.S. 367, 395 (1981); Columbia Broadcasting Sys. v. Democratic Nat’l Comm., 412 U.S. 94 (1973); Farmers Educ. & Co-op. Union v. WDAY, Inc., 360 U.S. 525 (1959); Loveday v. FCC, 707 F.2d 1443 (D.C. Cir. 1983). See also Kerry L. Monroe, Unreasonable Access: Disguised Issue Advocacy and the First Amendment Status of Broadcasters, 25 Fordham Intell. Prop. Media & Ent. L.J. 117, 131-144 (2014), available at: https://ir.lawnet.fordham.edu/iplj/vol25/iss1/3 (discussing background of political programming rules).

[24] NPRM at para. 29.

[25] See Turner Broadcasting System, Inc. v. FCC, 520 U.S. 180, 189 (1997); Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, 637 (1994).

[26] See U.S. v. Playboy Entertainment Group, Inc., 529 U.S. 803, 813 (2000).

[27] 47 CFR § 73.1212.

[28] 47 U.S.C. § 315(e); 47 CFR § 73.1943.

[29] McConnell v. FEC, 540 U.S. 93, 243 (2003).

[30] Id. at 240-46.

[31] Meredith McGehee, Who’s Behind That Political Ad?, Campaign Legal Center, at 10-11 (2016), https://campaignlegal.org/document/whos-behind-political-ad.

[32] Dietram A. Scheufele & David Tewksbury, Framing, Agenda Setting, and Priming: The Evolution of Three Media Effects Models, 57 J. COMM. 9 (2007).

[33] For example, Citizens United held that the disclaimer and disclosure provisions of the BCRA did not violate the First Amendment. Citizens United v. FEC, 558 U.S. 310, 371 (2010). See also Millicent Usoro, A Medium-Specific First Amendment Analysis on Compelled Campaign Finance Disclosure on the Internet, 71 FCLJ 299 (2019), http://www.fclj.org/wp-content/uploads/2019/03/71.2—Article-5—Millicent-Usoro.pdf.

[34] Daniel Victor, Campaign Ads: How To Free the Files at Your TV Station, ProPublica (2012), https://www.propublica.org/article/campaign-ads-how-to-free-the-files-at-your-tv-station1.

[35] Center for Responsive Politics, CRP Calls on FCC to Include Cable, Satellite and Radio in Political Ad Filing Requirements (2016), https://www.opensecrets.org/news/2016/01/crp-calls-on-fcc-to-include-cable-satellite-and-radio-in-political-ad-filing-requirements/.

[36] This is one example of results from local TV station WJLA-TV in Washington, DC: https://publicfiles.fcc.gov/tv-profile/WJLA-TV/political-files/2024/federal/us-senate/31128246-aee2-691a-1d5b-258227391fb4.

[37] The Open Secrets FCC Ad Data page has not been updated since 2020: https://www.opensecrets.org/ad-data.