Election laws fighting AI deepfakes need to be targeted and adaptable, report says

By Chris Teale, Route Fifty

A robocall in New Hampshire earlier this year, purportedly featuring President Joe Biden urging voters to sit out the primary, prompted a state investigation and a federal cease-and-desist letter. As of a few weeks ago, it is also the subject of a lawsuit from several Granite State voters.

The suit, filed by the League of Women Voters, the group’s New Hampshire chapter and three voters in the state, accuses those behind the robocall of “intimidating, threatening, or coercing, or attempting to intimidate, threaten, or coerce [voters], into not voting in the New Hampshire Primary.”

It further claims that the defendants—political consultant Steve Kramer and two telecom companies, Lingo Telecom and Life Corporation—“orchestrated a deceitful and malicious scheme, bolstered by artificial intelligence and caller ID spoofing” to suppress the vote. The New Hampshire Department of Justice declined to comment on the lawsuit or the case, saying only that its investigation is “ongoing.”

The brouhaha, though, struck a chord with many observers, who say it is an example of the negative role that AI could play in upcoming elections. The NewDEAL Forum, a progressive nonprofit dedicated to spreading policy ideas at the state and local level, said the state’s swift response to the robocall also provides a key takeaway for state and local officials.

“It was spotted, it was dealt with and so there's a lesson to be learned about how to jump on these things early,” said Debbie Cox Bultan, NewDEAL’s CEO. “State and local leaders need tools, and the public needs tools to help deal with this stuff.”

In a recent report, the group outlines resources that can be used to combat the threat AI may pose, including from generative AI tools like ChatGPT and others that produce images. It also recommends steps states can take.

To address the use of AI in voter suppression, such as providing false information about voting hours and locations or incorrect biographical details about a candidate, the NewDEAL report recommends that state policymakers pass laws requiring “clear labeling” of AI content in campaign ads and materials. The group further urges lawmakers to regulate AI-powered chatbots to ensure they are “not misleading voters.”

On a more practical level, NewDEAL suggests that officials run tabletop exercises to role-play various scenarios that could unfold during election season, and ensure they have rapid response capabilities in place for situations similar to the one in New Hampshire.

Public information campaigns are also critical, the group said, in educating voters about generative AI, especially those in vulnerable communities. Working with trusted leaders in those places, especially faith-based leaders, local businesses and others, can help “cut through the noise” and find accurate information, according to the report.

“Voter suppression and voter confusion is nothing new,” Cox Bultan said. “But there are differences with AI and the potential to spread so much more confusion about what's real and not real.”

Already, some states have passed laws requiring that political advertisements generated wholly or partly using AI must include a statement disclosing the use of the technology. Michigan was among the first to pass such a law late last year. It also defined AI under state campaign finance laws for the first time, and made it a crime to knowingly distribute AI-generated content for the purpose of harming a candidate’s reputation or electoral prospects in an election occurring within 90 days.

The NewDEAL report specifically recommends that laws regulating AI clearly define the technology, cover all synthetic content, mandate disclaimers and ensure that candidates can obtain injunctions against harmful material.

Guaranteeing legislation can adapt to AI’s evolution is crucial, Cox Bultan added. The report notes the technology’s “dynamic nature” and its “exponential growth potential,” meaning that lawmakers need to be able to regularly reevaluate their laws and policies.

It is also key that laws governing AI be focused on the new technology and have “clear intent,” so that they cannot be used to stifle other areas of political speech like satire and so that they do not run afoul of the First Amendment. Cox Bultan said legislators must be “smart, targeted and thoughtful” to make sure there are no constitutional issues with any new regulations.

The New Hampshire state House advanced a bill last week that takes on AI in political ads. In addition to requiring disclosures explaining that an ad’s image, video or audio “has been manipulated or generated by artificial intelligence technology and depicts speech or conduct that did not occur,” the bill also makes exemptions for satire or parody.

But Cox Bultan cautioned that transparency alone is not enough and that lawmakers cannot just rely on the public to spot disclosures on the use of AI. There needs to be “teeth” in the form of enforcement mechanisms, she said, as a way to deter bad actors from using AI for nefarious means.

“Transparency is a word that comes up a lot when we're having these conversations about AI, whether it's in elections or in other places,” she said. “Certainly that's part of the solution. It just can't be the whole solution.”

While the report primarily deals with the threat AI poses, its authors recognize the technology’s “upside potential,” which, for example, could include the ability to quickly translate campaign literature into other languages.

Cox Bultan said she is “cautiously optimistic” that leaders are taking the issue seriously.

“The fight for democracy is an ongoing one,” she said, “and this is just one more chapter in that book.”