OpenAI says Russian and Israeli groups used its tools to spread disinformation

On Thursday, OpenAI released its first report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company has disrupted disinformation campaigns coming from Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and publish propaganda on social media platforms and to translate their content into different languages. None of the campaigns gained traction or reached a large audience, according to the report.

As generative AI has become a booming industry, researchers and legislators have voiced widespread concern over its potential to increase the quantity and quality of online misinformation. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried, with mixed results, to assuage these concerns and put guardrails on their technology.

The 39-page report is one of the most detailed accounts an AI company has given of the use of its software for propaganda. OpenAI said its researchers had found and banned accounts linked to five covert influence operations over the past three months, which were run by a mix of state and private actors.

In Russia, two operations created and distributed content critical of the United States, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles attacking the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts that created a range of content, including posts accusing American student protests against Israel’s war in Gaza of being antisemitic.

Several of the disinformation spreaders that OpenAI banned from its platform were already known to researchers and authorities. The US treasury sanctioned two Russian men in March who were allegedly behind one of the campaigns that OpenAI detected, while Meta also banned Stoic from its platform this year for violating its policies.

The report also highlights how generative AI is being incorporated into disinformation campaigns as a way to improve certain aspects of content generation, such as producing more persuasive foreign-language posts, though it is not the sole tool of propaganda.

“All of these operations used AI to some degree, but none used it exclusively,” the report said. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats such as manually written texts or memes copied from across the internet.”

While none of the campaigns produced a noticeable impact, their use of the technology shows how malicious actors are discovering that generative AI allows them to increase propaganda production. Writing, translating and publishing content can now be done more efficiently through the use of AI tools, lowering the bar for creating disinformation campaigns.

In the past year, malicious actors have used generative AI in countries around the world to try to influence politics and public opinion. Deepfake audio, AI-generated images and text campaigns have been used to interfere with election campaigns, leading to increased pressure on companies like OpenAI to limit the use of their tools.

OpenAI said it plans to periodically publish such reports on covert influence operations, as well as remove accounts that violate its policies.
