OpenAI: unable to independently verify if social media ‘probe’ tool has been used by a Chinese government entity
By Ainvest
Tuesday, October 7, 2025, 6:24 am ET · 1 min read
OpenAI has released a report detailing suspected Chinese government operatives' use of ChatGPT for large-scale surveillance and monitoring. The report, published on October 7, 2025, highlights how authoritarian regimes leverage AI technology for surveillance and repression. It notes that Chinese operatives asked ChatGPT to help write proposals for tools that analyze travel movements and police records of the Uyghur minority, and to design promotional materials for a tool that scans social media for "extremist speech." OpenAI banned both users involved in these requests.
Ben Nimmo, principal investigator at OpenAI, stated, "There’s a push within the People’s Republic of China to get better at using artificial intelligence for large-scale things like surveillance and monitoring. It’s not last year that the Chinese Communist Party started surveilling its own population. But now they’ve heard of AI and they’re thinking, oh maybe we can use this to get a little bit better."
China's response to these allegations was swift. Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, DC, stated, "We oppose groundless attacks and slanders against China. China is rapidly building an AI governance system with distinct national characteristics, emphasizing a balance between development and security."
The report also notes that, even as the US and China compete for AI supremacy, AI is often used for mundane tasks such as data crunching and language polishing rather than for groundbreaking technological achievements.
Despite these findings, OpenAI could not independently verify whether the social media 'probe' tool has been used by a Chinese government entity. The report remains a snapshot of the broader landscape of authoritarian abuses of AI, highlighting the need for international cooperation to prevent misuse.
Editorial disclosure and AI transparency: Ainvest News uses advanced Large Language Model (LLM) technology to synthesize and analyze real-time market data. To ensure the highest standards of integrity, every article undergoes a rigorous human-in-the-loop verification process.
While AI assists with data processing and initial drafting, a professional Ainvest editorial staff member independently reviews, verifies, and approves all content to ensure its accuracy and compliance with the editorial standards of Ainvest Fintech Inc. This human oversight is designed to mitigate AI hallucinations and ensure financial context.
Investment disclaimer: This content is provided for informational purposes only and does not constitute professional investment, legal, or financial advice. Markets carry inherent risks. Users are advised to conduct independent research or consult a certified financial advisor before making any decisions. Ainvest Fintech Inc. disclaims all liability for actions taken based on this information.


