
OpenAI: Russia, China, Iran and Israel used its tools for influence campaigns



OpenAI has identified and removed five covert influence operations based in Russia, China, Iran and Israel that used its artificial intelligence tools to manipulate public opinion, the company said on Thursday.

In a new report, OpenAI detailed how these groups, some of which are linked to known propaganda campaigns, used the company’s tools for a variety of “deceptive activities,” including generating social media comments, articles and images in multiple languages, creating names and bios for fake accounts, debugging code, and translating and proofreading text. The networks focused on a range of issues, including defending the war in Gaza and Russia’s invasion of Ukraine, criticizing Chinese dissidents, and commenting on politics in India, Europe and the U.S., in their attempts to influence public opinion. While the operations targeted a wide range of online platforms, including X (formerly known as Twitter), Telegram, Facebook, Medium, Blogspot and other sites, “none have managed to engage a substantial audience,” according to OpenAI’s analysts.

The report, the first of its kind released by the company, comes amid global concerns about the potential impact AI tools could have on the more than 64 elections taking place around the world this year, including the U.S. presidential election in November. In one example cited in the report, a post from a Russian Telegram group said: “I’m sick and tired of these brain-damaged idiots playing games while Americans suffer. Washington needs to get its priorities right or they will feel the full force of Texas!”

The examples listed by OpenAI analysts reveal how foreign actors appear to be using AI tools for the same types of online influence operations they have been carrying out for a decade. They focus on using fake accounts, comments, and articles to shape public opinion and manipulate political outcomes. “These trends reveal a threat landscape marked by evolution, not revolution,” wrote Ben Nimmo, principal investigator on the Intelligence and Investigations team at OpenAI, in the report. “Threat actors are using our platform to improve their content and work more efficiently.”


OpenAI, which makes ChatGPT, says it now has more than 100 million weekly active users. Its tools make it easier and faster to produce large volumes of content, and can be used to mask language errors and generate fake engagement.

One of the Russian influence campaigns shut down by OpenAI, dubbed “Bad Grammar” by the company, used its AI models to debug code for a Telegram bot that posted short political comments in English and Russian. The operation targeted Ukraine, Moldova, the U.S. and the Baltic states, the company says. Another Russian operation, known as “Doppelganger,” which the U.S. Treasury Department has linked to the Kremlin, used OpenAI models to generate headlines, convert news articles into Facebook posts, and create comments in English, French, German, Italian and Polish. A well-known Chinese network, Spamouflage, also used OpenAI tools to research social media activity and generate text in Chinese, English, Japanese and Korean that was posted across several platforms, including X, Medium and Blogspot.

OpenAI also detailed how a Tel Aviv-based Israeli political marketing firm called Stoic used its tools to generate pro-Israel content about the war in Gaza. The campaign, dubbed “Zero Zeno,” targeted audiences in the U.S., Canada and Israel. On Wednesday, Meta, the parent company of Facebook and Instagram, said it had removed 510 Facebook accounts and 32 Instagram accounts linked to the same firm. The network of fake accounts, which included some posing as African Americans and students in the U.S. and Canada, often replied to prominent figures or media organizations with posts praising Israel, criticizing antisemitism on campuses, and denouncing “radical Islam.” The campaign appears not to have achieved any significant engagement, according to OpenAI. “Look, it’s not cool how these extremist ideas are, like, messing with the vibe of our country,” read one post cited in the report.

OpenAI says it is using its own AI-based tools to investigate and disrupt these foreign influence operations more effectively. “The investigations described in the attached report took days, rather than weeks or months, thanks to our tools,” the company said on Thursday. It also noted that despite the rapid evolution of AI tools, human error continues to be a factor. “AI can change the toolkit that human operators use, but it doesn’t change the operators themselves,” said OpenAI. “While it is important to be aware of the ever-changing tools that threat actors use, we must not lose sight of the human limitations that can affect their operations and decision-making.”

This story originally appeared on Time.com.

