
OpenAI says it has stopped several covert influence operations that abused its AI models



OpenAI said it has disrupted five covert influence operations that used its AI models for deceptive activity on the internet. These operations, which OpenAI shut down between 2023 and 2024, originated in Russia, China, Iran and Israel and attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. “As of May 2024, these campaigns do not appear to have significantly increased engagement or audience reach as a result of our services,” OpenAI said in a report about the operations, adding that it worked with people across the technology industry, civil society and governments to isolate these bad actors.

OpenAI’s report comes amid concerns about the impact of generative AI on the many elections scheduled around the world this year, including in the US. In its findings, OpenAI described how networks involved in influence operations used generative AI to produce text and images at much greater volume than before, and to fake engagement by generating comments on social media posts.

“Over the last year and a half, there have been a lot of questions about what might happen if influence operations used generative AI,” Ben Nimmo, principal investigator on the Intelligence and Investigations team at OpenAI, told reporters at a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the gaps.”

OpenAI said the Russian operation known as “Doppelganger” used the company’s models to generate headlines, convert news articles into Facebook posts and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI’s models to debug the code of a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US and the Baltic States. The Chinese network “Spamouflage,” known for its influence efforts on Facebook and Instagram, used OpenAI’s models to research social media activity and generate text-based content in multiple languages across several platforms. The Iranian “International Union of Virtual Media” also used AI to generate content in multiple languages.

OpenAI’s disclosure is similar to those other technology companies make from time to time. On Wednesday, for example, Meta released its latest report on coordinated inauthentic behavior, detailing how an Israeli marketing firm used fake Facebook accounts to run an influence campaign on its platform targeting people in the US and Canada.


