Adrian Perkins was running for re-election as mayor of Shreveport, Louisiana, when he was blindsided by a damaging campaign attack.
The satirical TV commercial, paid for by a rival political action committee, used artificial intelligence to portray Perkins as a high school student who is called to the principal’s office. Instead of scolding him for cheating on a test or getting into a fight, the principal faulted Perkins for failing to keep communities safe and create jobs.
The video superimposed Perkins’ face onto the body of an actor playing him. Although the ad was labeled as created with “deep learning computer technology,” Perkins said it was powerful and resonated with voters. He didn’t have enough money or campaign staff to counteract that, and he thinks that was one of the many reasons he lost the 2022 race. A representative for the group behind the ad did not respond to a request for comment.
“One hundred percent the deepfake ad affected our campaign because we were a down-ballot race with fewer resources,” said Perkins, a Democrat. “You had to choose where to put your efforts.”
While such attacks are a staple of rough-and-tumble political campaigning, the ad targeting Perkins was notable: it is believed to be one of the first examples of an AI deepfake deployed in a US political race. It also foreshadowed a challenge facing candidates in numerous state and local races this year, as generative AI has become more widespread and easier to use.
The technology – which can do everything from simplifying mundane campaign tasks to creating fake images, videos or audio – has already been deployed in some national races across the country and has spread much more widely in elections around the world. Despite its power as a tool to deceive, efforts to regulate it have been fragmented or delayed, a gap that could have an outsized impact on lower-profile races.
Artificial intelligence is a double-edged sword for candidates running such campaigns. Cheap and easy-to-use AI models can help them save time and money on some of their daily tasks. But they often don’t have the staff or expertise to combat AI-generated falsehoods, raising fears that a last-minute deepfake could fool enough voters to tip elections decided by narrow margins.
“AI-enabled threats affect fierce contests and low-key races alike, where small shifts matter and where there are often fewer resources to correct misleading narratives,” said Josh Lawson, director of AI and democracy at the Aspen Institute.
Some local candidates have already faced criticism for deploying AI in deceptive ways, from a Republican state Senate candidate in Tennessee who used an AI headshot to appear thinner and younger, to a Philadelphia Democratic sheriff whose re-election campaign promoted fake news stories generated by ChatGPT.
One challenge in separating fact from fiction is the decline of local media, which in many places has meant far less coverage of candidates running for state and local office, especially reporting that investigates candidates’ backgrounds and the workings of their campaigns. A lack of familiarity with the candidates could make voters more open to believing false information, said U.S. Sen. Mark Warner of Virginia.
The Democrat, who has worked extensively on AI-related legislation as chairman of the Senate Intelligence Committee, said AI-generated disinformation is easier to detect and combat in high-profile races because they are under greater scrutiny. When an AI-generated robocall impersonated President Joe Biden to discourage voters from going to the polls in the New Hampshire primary this year, the matter was quickly reported in the media and investigated, resulting in serious consequences for those behind it.
More than a third of states have passed laws regulating artificial intelligence in politics, and legislation aimed specifically at combating election-related deepfakes has received bipartisan support in every state where it has passed, according to the nonprofit consumer advocacy group Public Citizen.
But Congress has not yet acted, despite several bipartisan groups of lawmakers proposing such legislation.
“Congress is pathetic,” said Warner, who said he was pessimistic about Congress passing any legislation that would protect elections from AI interference this year.
Travis Brimm, executive director of the Democratic Association of Secretaries of State, called the specter of AI disinformation in down-ballot races an evolving issue in which people are “still working to figure out the best path forward.”
“This is a real challenge, which is why we’ve seen Democratic secretaries jump in to address it and pass real legislation with real penalties around AI abuse,” Brimm said.
A spokesperson for the Republican Secretaries of State Committee did not respond to the AP’s request for comment.
While experts and lawmakers worry about how generative AI attacks could skew an election, some candidates for state or local office have said AI tools have proven invaluable to their campaigns. AI refers to powerful computer systems, software, or processes that can emulate aspects of human work and cognition.
Glenn Cook, a Republican running for a state legislative seat in southeast Georgia, is less well-known and has far less campaign money than the incumbent he will face in Tuesday’s runoff. So he invested in a digital consultant who creates much of his campaign content using low-cost, publicly available generative AI models.
On his website, AI-generated articles are peppered with AI-generated images of community members smiling and chatting, none of whom actually exist. AI-generated podcast episodes use a cloned version of his voice to narrate his political stances.
Cook said he reviews everything before it becomes public. The savings – both in time and money – allowed him to knock on more doors in the district and participate in more in-person campaign events.
“My wife and I have done 4,500 doors here,” he said. “It frees you up to do a lot of things.”
Cook’s opponent, Republican state Rep. Steven Sainz, said he believes Cook “hides behind what amounts to a robot rather than authentically communicating his views to voters.”
“I am not basing it on artificially generated promises, but on real-world results,” Sainz said, adding that he is not using AI in his own campaign.
Republican voters in the district weren’t sure what to make of the use of AI in the race, but said they cared most about the candidates’ values and the campaign’s reach. Patricia Rowell, a retiree who voted for Cook, said she likes the fact that he was in her community three or four times during the campaign, while Mike Perry, a self-employed Sainz voter, said he felt a more personal connection with Sainz.
He said the expanded use of AI in politics is inevitable, but wondered how voters would be able to differentiate between what is true and what is not.
“It’s free speech, you know, and I don’t want to discourage free speech, but it all comes down to the integrity of the people putting it out there,” he said. “And I don’t know how you regulate integrity. It’s very difficult.”
Digital companies that market AI models for political campaigns told the AP that most AI use in local campaigns so far is minimal and designed to increase efficiency in tedious tasks, like analyzing survey data or writing social media copy that meets a certain word limit.
Political consultants are increasingly engaging with AI tools to see what works, according to a new report from a team led by researchers at the University of Texas at Austin. More than 20 political actors across the ideological spectrum told researchers they were experimenting with generative AI models in this year’s campaigns, although they also feared that less scrupulous actors might be doing the same.
“Local-level elections will be much more challenging because people will lash out,” said Zelly Martin, lead author of the report and senior researcher at the university’s Center for Media Engagement. “And what resources do they have to respond, unlike Biden and Trump, who have far more resources to defend themselves from attacks?”
There are huge differences in personnel, money and experience between election campaigns — for state legislator, mayor, school board or any other local office — and races for federal office. While a local campaign may have just a handful of staffers, competitive U.S. House and Senate campaigns may have dozens, and presidential operations may number in the thousands by the end of the campaign.
The campaigns of Biden and former President Donald Trump are both experimenting with AI to improve fundraising and voter outreach efforts. Mia Ehrenberg, a spokeswoman for the Biden campaign, said it also has a plan to debunk AI-generated misinformation. A spokesperson for the Trump campaign did not respond to the AP’s questions about its plans to address AI-generated misinformation.
Perkins, the former mayor of Shreveport, had a small team that decided to ignore the attack and continue campaigning when the deepfake of him being summoned to the principal’s office made it onto local TV. He said he saw the deepfake ad against him as a typical political dirty trick at the time, but the rise of AI in just the two years since his campaign has made him realize the power of the technology as a tool to deceive voters.
“In politics, people are always going to push the envelope a little to be effective,” he said. “We had no idea how significant it would be.”
___
Burke reported from San Francisco, Merica from Washington, and Swenson from New York.
___
This story is part of an Associated Press series, “The AI Campaign,” exploring the influence of artificial intelligence on the 2024 election cycle.
___
The Associated Press receives support from several private foundations to improve its explanatory coverage of elections and democracy, and from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters, and funded coverage areas at AP.org.