The Federal Election Commission has begun a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election, a move advocates say would safeguard voters against a particularly insidious form of election disinformation.
The FEC’s unanimous procedural vote in August advances a petition asking it to regulate ads that use artificial intelligence to misrepresent political opponents as saying or doing something they didn’t — an issue already surfacing in the 2024 GOP presidential primary.
Though the circulation of convincing fake images, videos or audio clips is not new, innovative generative AI tools are making them cheaper, easier to use, and more likely to manipulate public perception. As a result, some presidential campaigns in the 2024 race — including that of Florida GOP Governor Ron DeSantis — already are using them to persuade voters.
The Republican National Committee in April released an entirely AI-generated ad meant to show the future of the United States if President Joe Biden is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.
In June, DeSantis’ campaign shared an attack ad against his GOP primary opponent Donald Trump that used AI-generated images of the former president hugging infectious disease expert Dr. Anthony Fauci.
SOS America PAC, which supports Miami Mayor Francis Suarez, a Republican, also has experimented with generative AI, using a tool called VideoAsk to create an AI chatbot in his likeness.
The FEC meeting comes after the advocacy group Public Citizen asked the agency to clarify that an existing federal law against “fraudulent misrepresentation” in campaign communications applies to AI-generated deepfakes.
The panel’s vote shows the agency’s intent to consider the question, but it will not decide whether to actually develop rules governing the ads until after a 60-day public comment window, which is likely to begin next week.
In June, the FEC deadlocked on an earlier petition from the group, with some commissioners expressing skepticism that they had the authority to regulate AI ads. Public Citizen came back with a new petition citing the fraudulent misrepresentation law and arguing that the FEC does have jurisdiction.
A group of 50 Democratic lawmakers led by Rep. Adam Schiff also wrote a letter to the FEC urging the agency to advance the petition, saying, “Quickly evolving AI technology makes it increasingly difficult for voters to accurately identify fraudulent video and audio material, which is increasingly troubling in the context of campaign advertisements.”
Republican Commissioner Allen Dickerson said he remained unconvinced that the agency had the authority to regulate deepfake ads.
“I’ll note that there’s absolutely nothing special about deepfakes or generative AI, the buzzwords of the day, in the context of this petition,” he said, adding that if the FEC had this authority, it would mean it also could punish other kinds of doctored media or lies in campaign ads.
Dickerson argued the law does not go that far, but noted the FEC has unanimously asked Congress for more authority. He also raised concerns the move would wrongly chill expression that’s protected under the First Amendment.
Public Citizen President Robert Weissman disputed Dickerson’s points, arguing that deepfakes are different from other false statements or media because they fraudulently claim to speak on a candidate’s behalf in a way that is convincing to the viewer.
“The deepfake has an ability to fool the voter into believing that they are themselves seeing a person say or do something they didn’t say,” he said. “It’s a technological leap from prior existing tools.”
Weissman said acknowledging deepfakes are fraud solves Dickerson’s First Amendment concerns too — while false speech is protected, fraud is not.
Lisa Gilbert, Public Citizen’s executive vice president, said under its proposal, candidates would also have the option to prominently disclose the use of artificial intelligence to misrepresent an opponent, rather than avoid the technology altogether.
She argued action is needed because if a deepfake misleadingly impugning a candidate circulates without a disclaimer and doesn’t get publicly debunked, it could unfairly sway an election.
For instance, the RNC disclosed the use of AI in its ad, but in small print that many viewers missed. Gilbert said the FEC could set guidelines on where, how and for how long campaigns and parties need to display these disclaimers.
Even if the FEC decides to ban AI deepfakes in campaign ads, it wouldn’t cover all the threats they pose to elections.
For example, the law on fraudulent misrepresentation wouldn’t enable the FEC to require outside groups, like PACs, to disclose when they imitate a candidate using artificial intelligence technology, Gilbert said.
That means it would not cover an ad recently released by Never Back Down, a super PAC supporting DeSantis, that used an AI voice cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.
It also would not stop individual social media users from creating and disseminating misleading content — as they long have — with both AI-generated falsehoods and other misrepresented media, often referred to as “cheap fakes.”
Congress, however, could pass legislation creating guardrails for AI-generated deceptive content, and lawmakers, including Senate Majority Leader Chuck Schumer, have expressed intent to do so. Several states also have discussed or passed legislation related to deepfake technology.
Daniel Weiner, director of the Elections and Government Program at the Brennan Center for Justice, said disinformation falsely claiming elections were stolen is already a “potent force in American politics.”
More sophisticated AI, he said, threatens to worsen that problem.
“To what degree? You know, I think we’re still assessing,” he said. “But do I worry about it? Absolutely.”