Politics

FCC will consider AI-generated political ad disclosures




The Federal Communications Commission (FCC) introduced a measure on Wednesday that would require political ads to disclose the use of artificial intelligence (AI) software, in what could be the federal government’s first foray into regulating the technology’s use in politics.

If adopted by the full commission, broadcast television, radio and cable advertisers would be required to disclose their use of AI technology for voice and image generation, amid concerns that the rapidly advancing technology could be used to mislead voters as the 2024 elections approach.

“As artificial intelligence tools become more accessible, the commission wants to ensure that consumers are fully informed when the technology is used,” FCC Chairwoman Jessica Rosenworcel said in a statement Wednesday. “Today, I shared a proposal with my colleagues that makes it clear that consumers have a right to know when AI tools are being used in the political ads they see, and I hope they act quickly on this issue.”

Rosenworcel’s proposal would not completely ban the use of AI in political ads. If adopted, the rule would apply to both candidate ads and issue ads, according to the proposal. It would not apply to ads shown online or on streaming services.

The proposal specifically notes the risk of “deepfakes,” AI-generated images and audio intended to imitate a real person. AI skeptics have warned that these digitally created images and audio could mislead voters into believing a candidate did or said something they didn’t actually do.

The FCC already banned the use of deepfake voice technology in political robocalls earlier this year, after a group impersonated President Biden in an attempt to discourage voter turnout in the New Hampshire primary.

The details of what would be expected in AI disclosures were not specified and are left to the commission’s rulemaking process. The FCC would also have to draft a specific definition of AI content, a task that has already delayed regulatory efforts.

AI is “increasing” threats to the election system, technology policy strategist Nicole Schneidman told The Hill in March. “Disinformation, voter suppression – what generative AI is really doing is making it more efficient to be able to execute those threats.”

AI-generated political ads have already appeared in the 2024 election cycle. Last year, the Republican National Committee released an entirely AI-generated ad aimed at depicting a dystopian future under a second Biden administration. It employed fake but realistic images, showing boarded-up storefronts, armored military patrols on the streets, and waves of immigrants creating panic.

In India’s elections, recent AI-generated videos misrepresenting Bollywood stars criticizing the prime minister exemplify a trend that technology experts say is emerging in democratic elections around the world.

Sens. Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) also introduced a bill earlier this year that would require similar disclosures when AI is used in political ads.

The Associated Press contributed.



This story originally appeared on thehill.com.

