OpenAI Disrupts Influence Operations Linked to China, Russia, and Others


Actors behind these operations used OpenAI tools to generate comments, produce articles, or create fake names or bios for social media accounts.

OpenAI announced that it has disrupted five influence operations from four countries using its artificial intelligence (AI) tools to manipulate public opinion and shape political outcomes across the internet.

The company said on May 30 that these covert influence operations were from Russia, China, Iran, and Israel. Actors behind these operations used OpenAI tools to generate comments, produce articles, or create fake names or bios for social media accounts over the last three months.
The report found that the content pushed by these operations targets multiple ongoing issues, including criticisms of the Chinese regime from Chinese dissidents and foreign governments, U.S. and European politics, Russia’s invasion of Ukraine, and the conflict in Gaza.

However, these operations failed to meaningfully increase their audience engagement through the company's services, OpenAI said in a statement.

The company identified common trends in how these actors used its AI tools: generating content, mixing AI-generated material with older, manually created content, faking engagement by writing replies to their own social media posts, and boosting productivity through tasks such as summarizing social media posts.

Pro-Beijing Network

OpenAI said it disrupted an operation run by Spamouflage, a pro-Beijing disinformation and propaganda network in China. The Chinese operation used the company's AI models to seek advice about social media activities, research news and current events, and generate content in Chinese, English, Japanese, and Korean.

Much of the content generated by the Spamouflage network covers topics such as praise for the Chinese communist regime, criticism of the U.S. government, and attacks on Chinese dissidents.


Such content was posted on multiple social platforms, including X, Medium, and Blogspot. OpenAI found that in 2023, the Chinese operation generated articles claiming that Japan had polluted the environment by releasing wastewater from the Fukushima nuclear power plant. Actor and Tibet activist Richard Gere and Chinese dissident Cai Xia were also targets of this network.

The network also used the OpenAI model to debug code and generate content for a Chinese-language website that attacks Chinese dissidents, calling them “traitors.”

Last year, Facebook uncovered links between Spamouflage and Chinese law enforcement, noting that the group had been promoting pro-Beijing campaigns on social media since 2018. The company deleted about 7,700 Facebook accounts, along with about a hundred pages and Instagram accounts, involved in influence operations that pushed positive narratives about Beijing and negative comments about the United States and critics of the Chinese regime.

Russia, Israel, and Iran Operations

OpenAI also found two operations from Russia, one of which is Doppelganger. This operation used OpenAI tools to generate comments in multiple languages and post them on X and 9GAG. Doppelganger also used AI tools to translate articles into English and French and turn them into Facebook posts.

The company said the other is a previously unreported Russian network, Bad Grammar, which operates mainly on Telegram and focuses on Ukraine, Moldova, the United States, and the Baltic states. It used OpenAI tools to debug code for a Telegram bot that automatically posted content to the platform. The campaign generated short political comments in Russian and English about the Russia-Ukraine war and U.S. politics.

In addition, the ChatGPT parent company found one operation from Israel, tied to Tel Aviv-based political marketing firm STOIC, and another from Iran. Both used ChatGPT to generate articles. The Iranian operation published its content on a website associated with the threat actor, while the Israeli operation posted its comments on multiple platforms, including X, Facebook, and Instagram.

In the meantime, on May 29, Meta released a quarterly report revealing that "likely AI-generated" deceptive content had been posted on its platforms. The report indicated that Meta disrupted six covert influence operations in the first quarter, including one Iran-based network and another from the Israeli firm STOIC.

OpenAI launched ChatGPT to the public in November 2022. The chatbot swiftly became a global phenomenon, attracting hundreds of millions of users impressed by its ability to answer questions and engage across a wide array of topics.

OpenAI captured global tech attention last year when its board abruptly fired CEO Sam Altman. The move drew worldwide backlash, forcing the board to reinstate him and resulting in the resignation of most board members and the formation of a new board.

