AI-Powered Crypto Scams Surge 40% in 2024 Using Deepfakes

Coin World | Tuesday, Jun 10, 2025, 6:20 am ET

A new joint report from SlowMist, Elliptic, and a third blockchain security firm sheds light on the escalating threat of AI-powered crypto crime, emphasizing the growing sophistication of social engineering scams. The report, released on June 10, underscores how these scams increasingly exploit human psychology to deceive victims. Nearly 40% of high-value frauds in 2024 involved deepfake technology, with scammers using AI-generated videos of prominent figures such as Elon Musk to lend credibility to fraudulent projects. These deepfakes are used not only to manufacture social proof but also to bypass Know Your Customer (KYC) checks and even to lure victims into live phishing sessions on video-call platforms.

The report highlights various tactics employed by scammers, including AI-generated job offers that trick developers into downloading trojan malware disguised as project files. Once installed, the malware gives scammers unauthorized access to the victim's computer and any sensitive information on it. The report warns that such scams are becoming more prevalent in the job market, where scammers pose as recruiters to exploit job seekers.
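One concrete defense against malicious "project files" of this kind is to verify a download's checksum against one published on the project's official website before opening it. The sketch below is illustrative (the function names are our own, not from the report), using only Python's standard library:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: str, published_hex: str) -> bool:
    """Return True only if the file matches the checksum published on the
    project's official site (never trust a hash sent in the same chat
    as the file itself)."""
    return sha256_of_file(path) == published_hex.strip().lower()
```

A mismatch does not prove the file is malicious, but a match at least confirms you received the file the official source intended to distribute.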

To protect against these evolving threats, blockchain security firm SlowMist recommends several precautionary measures. Users are advised to be highly skeptical of promotional content on social media, especially posts offering jobs, trading bots, or high staking returns. The report emphasizes the importance of verifying information through official websites or trusted sources before engaging with any offers. Additionally, users should be wary of videos featuring public figures promoting crypto launches, as these can often be deepfakes designed to mislead.


SlowMist also warns against clicking on links or downloading files shared in group chats or social media comments. Tools like ScamSniffer can help by automatically blocking phishing links, while MistTrack can be used to check if a wallet address is associated with known scams. The report concludes by stressing the need for collective defense in an era where AI can mimic anyone, urging users to approach all online interactions with a healthy dose of skepticism.
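The screening that tools like ScamSniffer and MistTrack perform can be pictured as denylist lookups. The sketch below is a hypothetical local version of that idea; the domain and address lists, and the function names, are invented for illustration and are not the real APIs of either tool:

```python
# Hypothetical local safety checks, in the spirit of ScamSniffer
# (phishing-link blocking) and MistTrack (wallet-address screening).
# The denylists below are placeholders, not real threat-intel feeds.
from urllib.parse import urlparse

KNOWN_PHISHING_DOMAINS = {"c1aim-airdrop.example", "metamask-login.example"}
KNOWN_SCAM_ADDRESSES = {"0xdeadbeef00000000000000000000000000000000"}

def is_phishing_link(url: str) -> bool:
    """Flag a link whose host appears on the local phishing denylist."""
    host = (urlparse(url).hostname or "").lower()
    return host in KNOWN_PHISHING_DOMAINS

def is_flagged_address(addr: str) -> bool:
    """Flag a wallet address that appears on a known-scam list."""
    return addr.lower() in KNOWN_SCAM_ADDRESSES
```

Real services maintain continuously updated feeds rather than static sets, but the principle is the same: check the link or address against known-bad data before interacting with it.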