
Meta says it has identified misleading content likely generated by AI



Meta said on Wednesday (29) that it had found content “likely generated by artificial intelligence” being used deceptively on its Facebook and Instagram platforms, including comments praising Israel’s handling of its war in Gaza posted beneath content from global news organizations and US lawmakers.

In its quarterly security report, the social media company said the accounts posed as Jewish students, African-Americans and concerned citizens, targeting audiences in the United States and Canada. It attributed the campaign to STOIC, a Tel Aviv-based political marketing firm.

STOIC did not immediately respond to a request for comment on the allegations.

While Meta has found basic AI-generated profile photos in influence operations since 2019, this was the first report to reveal the use of text-based generative AI since the technology emerged in late 2022.

Researchers fear that generative AI, which can quickly and cheaply produce human-like text and audio, could lead to more efficient disinformation campaigns and influence elections.

On a call with reporters, Meta security executives said they had removed the Israeli campaign and did not believe new AI technologies had impeded their ability to disrupt influence networks, which are coordinated attempts to push particular messages.

Executives said they had not seen AI-generated images of politicians realistic enough to be mistaken for authentic photos.

“There are several examples on these networks of how they likely use generative AI tools to create content. Perhaps this will give them the ability to do it faster or with more volume. But it didn’t really impact our ability to detect them,” said Meta’s head of threat investigations, Mike Dvilyanski.

The report highlighted six covert influence operations that Meta disrupted in the first quarter.

In addition to the STOIC network, Meta closed an Iran-based network focused on the conflict between Israel and Hamas, although it did not identify the use of generative AI in that campaign.

Meta and other tech giants have been grappling with how to address the potential misuse of new artificial intelligence technologies, especially in elections.

Researchers found examples of image generators from companies like OpenAI and Microsoft producing photos with election-related misinformation, despite these companies having policies against this type of content.

Companies have emphasized digital labeling systems to mark AI-generated content as it is created, but the tools don’t work on text and researchers have doubts about their effectiveness.

The European Union elections in early June and the United States election in November will be key tests of Meta’s defenses.
