ChatGPT bans multiple accounts linked to Iranian operation creating false news reports
By Ainvest
Friday, August 16, 2024, 8:20 PM ET · 1 min read
OpenAI deactivated several ChatGPT accounts using the artificial intelligence chatbot to spread disinformation as part of an Iranian influence operation, the company reported Friday.
In a blog post, OpenAI announced that it had deactivated a cluster of ChatGPT accounts that were part of an Iranian influence operation spreading disinformation about the U.S. presidential election [1]. This is not the first time OpenAI has encountered such activity: in May, the company disrupted five covert campaigns that sought to manipulate public opinion using ChatGPT [1].

According to OpenAI, the operation produced AI-generated articles and social media posts, although it is unclear how large an audience these efforts reached [1]. The company's investigation found that the cluster of accounts belonged to a broader Iranian campaign to influence the U.S. election, which Microsoft Threat Intelligence has identified as Storm-2035 [2]. Microsoft describes Storm-2035 as an Iranian network that operates multiple sites imitating news outlets and actively targets U.S. voter groups on opposing ends of the political spectrum with polarizing messaging on a range of topics [2].
OpenAI's approach to tackling these operations resembles that of social media companies facing similar problems: a whack-a-mole strategy of banning associated accounts as they surface [1]. Microsoft's report on Storm-2035 informed OpenAI's investigation, providing insight into the group's tactics and motivations.
The use of AI-generated content in influence operations is not a new phenomenon. In previous election cycles, state actors have employed social media platforms like Facebook and Twitter to disseminate misinformation and sway public opinion [3]. However, the increasing adoption of AI tools like ChatGPT by these groups poses new challenges in detecting and combating such activities [1].
References:
[1] TechCrunch. OpenAI Shuts Down Election Influence Operation Using ChatGPT. August 16, 2024. https://techcrunch.com/2024/08/16/openai-shuts-down-election-influence-operation-using-chatgpt/
[2] Microsoft 365 Defender Threat Intelligence. Storm-2035: A Long-Running Iranian Phishing Campaign Targeting U.S. Election Candidates and Influencers. July 29, 2024. https://www.microsoft.com/en-us/security/blog/storm-2035-a-long-running-iranian-phishing-campaign-targeting-us-election-candidates-and-influencers/
[3] The New York Times. Russia's Social Media War on America, Explained. October 19, 2018. https://www.nytimes.com/2018/10/19/us/politics/russia-social-media-war-america-explained.html

Editorial Disclosure and AI Transparency: Ainvest News uses advanced Large Language Model (LLM) technology to synthesize and analyze real-time market data. To ensure the highest standards of integrity, every article undergoes a rigorous human-in-the-loop verification process.
While AI assists with data processing and initial drafting, a professional Ainvest editorial staff member independently reviews, verifies, and approves all content to ensure its accuracy and compliance with the editorial standards of Ainvest Fintech Inc. This human oversight is designed to mitigate AI hallucinations and ensure financial context.
Investment Disclaimer: This content is provided for informational purposes only and does not constitute professional investment, legal, or financial advice. Markets carry inherent risks. Users are advised to conduct independent research or consult a certified financial advisor before making any decisions. Ainvest Fintech Inc. disclaims all liability for actions taken based on this information.


