AI’s Impact on Elections: Assessing the Growing Threats and Solutions

By Matt Cohen | Democracy Docket

Arizona Secretary of State Adrian Fontes doesn’t speak German. But if you tuned in to his recent interview with PBS at the wrong moment, you might not know that. Midway through the 18-minute interview, the feed cuts to a video of Fontes speaking German.

“For a malicious adversary, neither you nor your target need to be able to speak a language to generate content for it,” he said in perfect, fluent German.

If it weren’t for a disclaimer, it would be nearly impossible to tell that it wasn’t actually Fontes speaking. It was a deepfake of Fontes, produced with his permission, to demonstrate just how alarmingly lifelike and manipulative AI-generated content has become.

Over the past few election cycles, coordinated disinformation campaigns from domestic and foreign adversaries have emerged as one of the biggest threats to election security. In the 2024 election cycle, those threats are greater than ever thanks to the evolution of generative artificial intelligence.

“Generative AI is yet another tool in the arsenal of bad guys, but the actual weaponry is misinformation, disinformation, and malinformation,” Fontes told Democracy Docket in a recent panel on AI in elections. “And what generative AI does is it opens up this Pandora’s box of new tools to sort of broaden and deepen the way that these attacks against our democracy can be levied.”

Along with the NewDEAL Forum — a national network of pro-growth progressive state and local elected officials who work together to address issues like voting rights and climate change — a coalition of secretaries of state is working to develop tools and best practices to help raise awareness of generative AI disinformation this election season.

The effort is led by Fontes — along with Michigan Secretary of State Jocelyn Benson and Minnesota Secretary of State Steve Simon — to prepare election workers and voters in their states to be vigilant and savvy against the various generative AI threats. Those campaigns can take many forms, from the deepfake video of Fontes to a recent scheme in New Hampshire in which thousands of AI-generated robocalls used President Joe Biden’s voice to urge voters not to vote in the state’s January primary election.

A big part of this effort, according to NewDEAL CEO Debbie Cox Bultan, is simply raising voters’ awareness of the issue. A NewDEAL poll in Arizona in April found that only 41% of respondents knew anything about AI and elections. “That just tells me that we have a lot of work to do to make sure that people are aware of what might happen,” she told Democracy Docket.

She also pointed to the Biden/New Hampshire incident as an example of how the effort to raise awareness of AI threats in elections is working. “They did a great job. They had a plan, they got the information out about the misinformation that was quickly spreading across the board,” she said. “That’s a big part, to me, of what needs to happen going forward. To have those plans in place to respond when an incident like that happens.”

In March, NewDEAL put out a document that outlines best practices for election officials — from secretaries of state to county election workers — to mitigate the negative impacts of AI in the upcoming election. The document also suggests legislation that state lawmakers can pass to help protect democracy from AI threats. The suggestions include shorter-term practices as well, like public information campaigns about AI threats and rapid response protocols.

“We’re focused on making sure that our adversaries can’t use this to turbocharge misinformation that we anticipate will already be a big part of our election cycle,” Benson told Democracy Docket. “And we’re proactively trying to prepare citizens to not be fooled by misinformation that is turbocharged through AI, knowing how to spot deepfakes and the like.”

In Arizona, Fontes’ office regularly leads roleplaying and tabletop exercises to train election workers to spot and quickly respond to various generative AI threats during election season — like the deepfake videos of himself that he’s commissioned. It’s a practice that Benson’s office has also employed in Michigan. “We’re also working with law enforcement and have developed tabletop exercises, not unlike what Arizona has done, to train our state clerks, law enforcement and first responders to rapidly respond to any issues that may occur around voting on or before election day, and also to be prepared to stop the impact or negative impact of AI from spreading,” she said.

While these exercises help in the short term, there is also a need to address the threat of AI through legislation. As NewDEAL’s document suggests, state legislatures “should pass laws regarding the use of AI for campaigns, requiring the clear labeling of certain kinds of AI-generated campaign material.”

While most states are introducing legislation to regulate the use of AI — at least 40 in the 2024 legislative session alone — only 18 have laws that specifically address election-related AI restrictions. Michigan is one of the few states with such laws. Last November, Michigan passed a law that makes it a crime to knowingly distribute AI-produced election material with the intent of harming a campaign, and political advertisements are now required by law to carry a disclaimer when they use AI-generated content.

“That disclosure requirement also helps equip citizens with the knowledge of how to be critical consumers of information when they’re seeing different types of information come their way,” Benson said.

While longer-term solutions remain the ongoing goal for Fontes, Benson and NewDEAL, with about 100 days to go until the election, the priority is making sure election workers are prepared for AI threats that could pop up at any time.

“We’re focused on … continuing to share best practices, convening people so that we can talk about what they’re seeing on the ground and how they’re reacting and what people can learn from those experiences,” Bultan said.