
A parody ad shared by Elon Musk clones Kamala Harris’ voice, raising concerns about AI in politics



NEW YORK — A video that uses an artificial intelligence voice-cloning tool to imitate the voice of Vice President Kamala Harris saying things she didn’t say is raising concerns about AI’s power to deceive, with about three months to go until Election Day.

The video gained attention after technology billionaire Elon Musk shared it on his social media platform X on Friday without explicitly noting that it was originally released as a parody.

On Sunday night, Musk clarified that the video was satire, pinning the original creator’s post to his profile and using a pun to argue that parody is not a crime.

The video uses many of the same visuals as a real ad that Harris, the likely Democratic presidential nominee, released to announce the launch of her campaign. But the fake ad swaps out Harris’ narration for an AI-generated voice that convincingly impersonates her.

“I, Kamala Harris, am your Democratic presidential candidate because Joe Biden finally exposed his senility in the debate,” the AI voice says in the video. It claims Harris is a “diversity hire” because she is a woman and a person of color, and says she doesn’t know “the first thing about running the country.” The video maintains the “Harris for President” branding and also adds some authentic past clips of Harris.

Mia Ehrenberg, a spokeswoman for the Harris campaign, said in an email to The Associated Press: “We believe the American people want the true freedom, opportunity and security that Vice President Harris is offering; not the fake and manipulated lies of Elon Musk and Donald Trump.”

The widely shared video is an example of how AI-generated images, videos, and audio clips have been used both to mock and to mislead about politics as the United States approaches the presidential election. It exposes how, even as high-quality AI tools have become more accessible, significant federal action to regulate their use continues to be lacking, leaving the rules governing AI in politics largely to states and social media platforms.

The video also raises questions about how best to deal with content that blurs the lines of what is considered an appropriate use of AI, especially if it falls into the category of satire.

The original user who posted the video, a YouTuber known as Mr. Reagan, disclosed from the start, both on YouTube and on X, that the manipulated video was a parody. However, Musk’s initial post of the video, which had a much wider reach with 130 million views on X, according to the platform, included only the caption “This is amazing” with a laughing emoji.

Over the weekend, before Musk clarified on his profile that the video was a joke, some participants in X’s “community note” feature suggested labeling his post as manipulated. No such label was added to it, even though Musk posted separately about the parody video.

Some online users questioned whether his initial post might violate X’s policies, which state that users “may not share synthetic, manipulated, or out-of-context media that may mislead or confuse people and cause harm.”

The policy has an exception for memes and satire, as long as they do not cause “significant confusion about the authenticity of the media.”

Chris Kohls, the man behind the Mr. Reagan persona, pointed an AP reporter to a YouTube video he posted Monday in response to the controversy. In it, he confirmed that he used AI to make the fake ad and argued that it was obviously a parody, with or without a label.

Musk endorsed Trump, the former Republican president and current candidate, earlier this month. Musk did not respond to an emailed request for comment.

Two experts specializing in AI-generated media analyzed the audio from the fake ad and confirmed that much of it was generated using AI technology.

One of them, University of California, Berkeley digital forensics expert Hany Farid, said the video shows the power of generative AI and deepfakes.

“The AI-generated voice is very good,” he said in an email. “Even though most people don’t believe it’s Vice President Harris’ voice, the video is much more powerful when the words are in her voice.”

He said generative AI companies that make voice cloning and other AI tools available to the public should do better to ensure their services are not used in ways that could harm people or democracy.

Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid, saying he thought many people would be fooled by the video.

“I’m sure most people who look at this don’t assume it’s a joke,” Weissman said in an interview. “The quality is not great, but it is good enough. And precisely because it feeds into pre-existing themes that have circulated around her, most people will believe it is real.”

Weissman, whose organization has been advocating for Congress, federal agencies and states to regulate generative AI, said the video is “the kind of thing we’ve been warning about.”

Other generative AI deepfakes in the US and elsewhere have attempted to influence voters with misinformation, humor, or both. In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig an election and raise the price of beer days before the vote. In Louisiana in 2022, a political action committee’s satirical ad superimposed the face of a Louisiana mayoral candidate onto an actor portraying him as an underachieving high school student.

Congress has not yet passed legislation on AI in politics, and federal agencies have taken only limited action, leaving most existing US regulation to the states. More than a third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.

In addition to X, other social media companies have also created policies regarding synthetic and manipulated media shared on their platforms. Users of the video platform YouTube, for example, must disclose whether they used generative artificial intelligence to create videos or face suspension.

___

The Associated Press receives support from several private foundations to improve its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. AP is solely responsible for all content.
