Policymakers are working furiously ahead of the November election to address the challenges that artificial intelligence and deepfakes pose in political advertising.
Last week, the Federal Communications Commission announced it would move forward with a proposal to require on-air disclosure when AI-generated content is used in political TV and radio advertisements. The timeline for implementation is unclear, but the rule is expected to take effect by November.
“There’s too much potential for AI to manipulate voices and images in political advertising to do nothing,” FCC Chairwoman Jessica Rosenworcel said in a statement.
The proposal didn't receive unanimous support. Commissioner Brendan Carr said in a statement that the FCC’s proposal, which will be subject to a vote later this year, is a “recipe for chaos” that can “only muddy the waters.” He also pointed to a recent letter from Federal Election Commission Chairman Sean Cooksey, who said the FCC was overstepping its jurisdiction.
“Suddenly, Americans will see disclosures for ‘AI-generated content’ on some screens but not others, for some political ads but not others, with no context for why they see these disclosures or which part of the political advertisement contains AI,” Carr said. “Far from promoting transparency, the FCC’s proposed rules would mire voters in confusion, create a patchwork of inconsistent rules, and encourage monied, partisan interests to weaponize the law for electoral advantage.”
The proposed rule is one of many recent efforts to get ahead of the threat deepfakes pose. State and local election offices have already seen how AI-generated content can be used to spread misinformation and disinformation. President Joe Biden, for one, has been the subject of a wave of AI-generated content, including a video on Facebook that made it appear he inappropriately touched his granddaughter’s chest after they voted in the 2022 midterms.
“It's becoming more and more difficult as a consumer of content to know what's real and what's not,” Ricky Hatch, clerk and auditor of Weber County, Utah, said during a recent meeting of the National Association of Election Officials’ Committee on Ethics in Practice. “In the end, we may have a citizenry, including us, where you just simply can't believe anything: You watch a video, you just can't believe it. And we have to figure out as a society, how do we get past that?”
Several state legislatures have already passed laws requiring disclosure of AI’s use in elections and campaigns; only Minnesota and Texas have banned the practice outright. More than three dozen states are considering similar disclosure bills.
Meanwhile, Louisiana Gov. Jeff Landry went a different route from his peers and vetoed a bill in late June that would have banned the distribution and transmission of campaign materials that had "been created or intentionally manipulated to create a realistic but false image, audio, or video with the intent to deceive a voter or injure the reputation of a known candidate in an election." In his veto message, Landry said the legislation “creates serious First Amendment concerns as it relates to emerging technologies.”
“The law is far from settled on this issue, and I believe more information is needed before such regulations are enshrined into law,” Landry continued. Instead, he argued that a proposed joint legislative committee to study AI regulations is the best way to proceed. That bill has passed both chambers of the legislature.
Local leaders are making moves on AI, too. During the National Association of Election Officials’ meeting, Bruce Elfant, the tax assessor-collector and voter registrar of Travis County, Texas, pointed to a recent op-ed he penned warning residents of the “serious challenges” misinformation poses to democracy. Marion County, Florida, Supervisor of Elections Wesley Wilcox said during the same meeting that his office will be “prebunking” obvious misinformation to stop it from spreading.
The NewDEAL Forum, a center-left nonprofit dedicated to spreading policy ideas at the state and local level, announced this month that it had launched an AI task force, with one key focus: addressing the technology’s role in elections and combating malicious uses of it.
New York Assemblymember Alex Bores, a co-chair of the NewDEAL task force, said Slovakia’s recent election offers a cautionary tale. A leaked recording of a candidate pledging to rig the election, and to raise the cost of beer, was found to be fake only after it had entered the public sphere.
Bores’ legislation requiring the disclosure of AI’s use in ads was added to the New York state budget, along with a provision allowing candidates portrayed in a deepfake ad to sue to have it blocked. He said during the task force’s launch that proving something is true is a more sustainable path forward than merely declaring something fake.
“One underrated effect of being able to pull down deepfakes is that it gives a more direct path to proving what is actually true,” he said. “If there is a true video out there and a candidate says, ‘Oh no, that's a deepfake,’ now in New York, we can say, ‘OK. Sue. Prove it.’”