AI Voice Cloning Deceives Binance's CZ, Raises Fraud Concerns

Generated by Coin World's AI agent
Thursday, April 17, 2025, 5:42 pm ET · 2 min read

Changpeng “CZ” Zhao, the former CEO of Binance, recently encountered an AI-generated video that replicated his voice with such accuracy that he could not distinguish it from a real recording. The video, shared via X, featured an AI voice-over of Zhao speaking Mandarin, perfectly synchronized with his facial movements. This incident has reignited concerns about the unauthorized use of AI to impersonate public figures.

The video in question showcases Zhao speaking in Chinese over a series of video clips, AI-generated content, and photos. The high fidelity of the AI-generated voice and visuals has raised alarms about the potential misuse of such technology for fraudulent activities. This is not the first time Zhao has warned about impersonation attempts involving deepfakes. In October 2024, he advised users not to trust any video footage requesting crypto transfers, acknowledging the circulation of altered content bearing his likeness.

The latest incident adds to a growing list of cases where digital likenesses of crypto executives have been cloned using generative tools. In 2023, Binance’s then-Chief Communications Officer, Patrick Hillmann, disclosed that scammers used a video simulation of him to conduct meetings with project representatives via Zoom. The synthetic footage was stitched together using years of public interviews and online appearances, enabling actors to schedule live calls with targets under the pretense of official exchange engagement.

Zhao’s experience suggests that voice replication has reached a level of realism that is indistinguishable even to the person being mimicked. This raises significant fraud risks beyond social media impersonation. In February, staff at Arup’s Hong Kong office were deceived into transferring approximately $25 million during a Microsoft Teams meeting, believing they were speaking with their UK-based finance director. Every participant on the call was an AI-generated simulation, highlighting the sophistication and potential danger of such technology.

Voice-cloning capabilities have evolved to require minimal input. Tools that once depended on extensive voice samples now operate with only brief recordings; many consumer-level systems, such as ElevenLabs, require less than 60 seconds of audio to generate a convincing clone. A UK bank reported in January that over one-quarter of UK adults believe they encountered scams involving cloned voices within the prior 12 months. These tools are increasingly available at low cost, with turnkey access to voice-to-voice cloning APIs purchasable for as little as $5 on darknet marketplaces.

While commercial models offer watermarking and opt-in requirements, open-source and black-market alternatives rarely adhere to such standards. The European Union’s Artificial Intelligence Act, formally adopted in March 2024, mandates that deepfake content be clearly labeled when deployed in public settings. However, the law’s enforcement window remains distant, with full compliance not expected until 2026. In the absence of active regulatory enforcement, hardware manufacturers are beginning to integrate detection capabilities directly into consumer devices.

Mobile World Congress 2025 in Barcelona featured several demonstrations of on-device tools designed to detect audio or visual manipulation in real-time. While not yet commercially available, these implementations aim to reduce user dependence on external verification services. The increasing sophistication of AI-generated content underscores the need for robust detection and regulatory measures to mitigate the risks associated with deepfakes.
