Asia Sees 87 Deepfake Scam Rings Dismantled in 2025

Generated by AI Agent · Coin World
Tuesday, Jun 10, 2025, 8:15 am ET · 2 min read

In the first quarter of 2025, a significant crackdown on deepfake scam rings was reported across Asia. The 2025 Anti-Scam Month Research Report, co-authored by Bitget, SlowMist, and Elliptic, revealed that 87 deepfake-driven scam rings were dismantled. This alarming statistic highlights the escalating threat of AI-driven scams in the cryptocurrency space.

The report also noted a 24% year-on-year increase in global crypto scam losses, totaling $4.6 billion in 2024. Nearly 40% of high-value fraud cases involved deepfake technologies, with scammers using sophisticated impersonations of public figures, founders, and platform executives to deceive users. The speed at which scammers can generate synthetic videos, coupled with the viral nature of social media, gives deepfakes a unique advantage in both reach and believability.

Defending against AI-driven scams requires more than technological solutions; it demands a fundamental change in mindset. In an era where synthetic media can convincingly imitate real people and events, trust must be earned through transparency, constant vigilance, and rigorous verification at every stage.

The report details the anatomy of modern crypto scams, pointing to three dominant categories: AI-generated deepfake impersonations, social engineering schemes, and Ponzi-style frauds disguised as DeFi or GameFi projects. Deepfakes are particularly insidious because AI can simulate text, voice messages, facial expressions, and even actions. For example, fabricated video endorsements of investment platforms, attributed to public figures such as Singapore's Prime Minister and Elon Musk, have been circulated on Telegram, X, and other social media platforms to exploit public trust.

AI can even simulate real-time reactions, making these scams increasingly difficult to distinguish from reality. Sandeep Nailwal, co-founder of the blockchain platform Polygon, raised the alarm in a May 13 post on X, revealing that bad actors had been impersonating him on Zoom. He said several people had contacted him on Telegram, asking whether he was on a Zoom call with them and whether he had asked them to install a script. SlowMist's CEO also warned about Zoom deepfakes, urging users to check the domain names of Zoom links carefully to avoid falling victim to such scams.
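The advice to scrutinize Zoom link domains can be partly automated. Below is a minimal, illustrative Python sketch, not taken from the report, that flags meeting links whose host is not zoom.us or zoom.com (or a subdomain of one of them); the allowlist and function names are assumptions made for this example.

```python
from urllib.parse import urlparse

# Illustrative allowlist of official Zoom domains (an assumption for this sketch).
OFFICIAL_ZOOM_DOMAINS = {"zoom.us", "zoom.com"}

def is_official_zoom_link(url: str) -> bool:
    """Return True only if the URL's host is an official Zoom domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return any(
        host == domain or host.endswith("." + domain)
        for domain in OFFICIAL_ZOOM_DOMAINS
    )

if __name__ == "__main__":
    links = [
        "https://us02web.zoom.us/j/1234567890",    # genuine subdomain of zoom.us
        "https://zoom.us.meeting-join.com/j/987",  # lookalike: real domain is meeting-join.com
        "https://zoorn.us/j/555",                  # typosquat
    ]
    for link in links:
        print(f"{link} -> {'official' if is_official_zoom_link(link) else 'SUSPICIOUS'}")
```

A check like this only catches domain spoofing; it does nothing against a deepfake participant on a legitimate call, which is why verification through a separate, trusted channel still matters.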

As AI-powered scams grow more advanced, users and platforms need new strategies to stay safe. Deepfake videos, fake job tests, and phishing links are making it harder than ever to spot fraud. For institutions, regular security training and strong technical defenses are essential: businesses are advised to run phishing simulations, harden email systems, and monitor code for leaked secrets (a minimal scanning sketch follows below). Building a security-first culture, where employees verify before they trust, is the best way to stop scams before they start. Gracy Chen, CEO of Bitget, offers everyday users a straightforward approach: "Verify, isolate, and slow down." She added: "Always verify information through official websites or trusted social media accounts—never rely on links shared in Telegram chats or Twitter comments." She also stressed the importance of isolating risky actions by using separate wallets when exploring new platforms.
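To illustrate the "monitor code for leaks" advice, here is a minimal Python sketch (not from the report) that scans a directory of source files for a few common credential patterns. The pattern set and file selection are assumptions made for this example; dedicated scanners such as gitleaks or truffleHog are far more thorough and should be preferred in practice.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners cover far more cases.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) hits for one file."""
    hits = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

if __name__ == "__main__":
    # Scan Python files under the given directory (default: current directory).
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file in root.rglob("*.py"):
        for lineno, name in scan_file(file):
            print(f"{file}:{lineno}: possible {name}")
```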
