ChatGPT bans multiple accounts linked to Iranian operation creating false news reports
Friday, Aug 16, 2024 8:20 pm ET
OpenAI deactivated several ChatGPT accounts using the artificial intelligence chatbot to spread disinformation as part of an Iranian influence operation, the company reported Friday.
In a blog post, OpenAI announced the deactivation of several ChatGPT accounts that were part of an Iranian influence operation spreading disinformation about the U.S. presidential election [1]. This is not the first time OpenAI has encountered such misuse of its AI chatbot: in May, the company disrupted five campaigns that aimed to manipulate public opinion through ChatGPT [1].

According to OpenAI, the operation created AI-generated articles and social media posts, although it is unclear how large an audience these efforts reached [1]. The company's investigation revealed that the cluster of accounts was part of a broader Iranian campaign to influence U.S. elections, which Microsoft's threat intelligence reporting identified as Storm-2035 [2]. Microsoft noted that Storm-2035 is an Iranian network with multiple sites imitating news outlets, actively engaging U.S. voter groups on opposing ends of the political spectrum with polarizing messaging on a range of topics [2].
OpenAI's approach to tackling these operations resembles that of social media companies confronting similar issues: a whack-a-mole strategy of banning associated accounts as they emerge [1]. Microsoft's reporting on Storm-2035 informed OpenAI's investigation, providing insight into the group's tactics and motivations.
The use of AI-generated content in influence operations is not a new phenomenon. In previous election cycles, state actors used social media platforms such as Facebook and Twitter to disseminate misinformation and sway public opinion [3]. The growing adoption of AI tools like ChatGPT by these groups, however, poses new challenges for detecting and combating such activity [1].
References:
[1] TechCrunch. OpenAI Shuts Down Election Influence Operation Using ChatGPT. August 16, 2024. https://techcrunch.com/2024/08/16/openai-shuts-down-election-influence-operation-using-chatgpt/
[2] Microsoft 365 Defender Threat Intelligence. Storm-2035: A Long-Running Iranian Phishing Campaign Targeting U.S. Election Candidates and Influencers. July 29, 2024. https://www.microsoft.com/en-us/security/blog/storm-2035-a-long-running-iranian-phishing-campaign-targeting-us-election-candidates-and-influencers/
[3] The New York Times. Russia's Social Media War on America, Explained. October 19, 2018. https://www.nytimes.com/2018/10/19/us/politics/russia-social-media-war-america-explained.html