
Political consultant indicted over fake Biden robocall in New Hampshire

The Democratic political consultant who admitted to using a deepfake of President Biden’s voice in a New Hampshire primary robocall earlier this year was indicted on Wednesday.

Steve Kramer, who said he created the robocall to warn about the dangers of artificial intelligence (AI), has been accused of bribery, intimidation and suppression, according to local news outlet WMUR, which first reported the charges.

The call was the first known use of deepfake technology in US politics, triggering a wave of calls to regulate the use of AI in elections. Biden’s fake voice on the call encouraged thousands of New Hampshire primary voters to stay home and “save” their votes.

“This is a way to make a difference, and I did,” Kramer told NBC News in February. “For $500, I got about $5 million in equity, whether it was media attention or regulatory action.”

The consultant previously worked for Rep. Dean Phillips’ (D-Minn.) presidential campaign, which was suspended in March, although he said Phillips’ team was not connected to or aware of his robocall effort. He also supported efforts to regulate the technology.

“With an investment of just $500, anyone could replicate my intentional call,” Kramer said in a statement in February. “Immediate action is needed across all regulatory bodies and platforms.”

The Federal Communications Commission (FCC) announced Wednesday that it will consider requiring political advertisers to disclose the use of AI on television and radio.

“As artificial intelligence tools become more accessible, the commission wants to ensure that consumers are fully informed when the technology is used,” FCC Chairwoman Jessica Rosenworcel said in a statement Wednesday.

The FCC banned the use of artificial intelligence in robocalls earlier this year following Kramer’s effort in New Hampshire.

AI is “increasing” threats to the election system, technology policy strategist Nicole Schneidman told The Hill in March. “Disinformation, voter suppression – what generative AI is really doing is making it more efficient to be able to execute those threats.”

AI-generated political ads have already made their way into the 2024 election cycle. Last year, the Republican National Committee released an entirely AI-generated ad aimed at showing a dystopian future under a second Biden administration. It used fake but realistic images showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants sowing panic.

In India’s elections, recent AI-generated videos misrepresenting Bollywood stars criticizing the prime minister exemplify a trend that technology experts say is emerging in democratic elections around the world.

Senators Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) also introduced a bill earlier this year that would require similar disclosures when AI is used in political ads.

The Hill has reached out to the New Hampshire Attorney General’s office for comment.



This story originally appeared on thehill.com.
