"Hidden Code in License Files Turns AI Tools Into Malware Spreaders"

Generated by AI | Coin World
Friday, Sep 5, 2025, 1:22 am ET · 3 min read
Summary

- HiddenLayer uncovered a "CopyPasta License Attack" vulnerability in AI coding tools like Cursor, enabling malware injection via hidden prompts in LICENSE.txt/README.md files.

- Coinbase engineers widely use Cursor, but CEO Brian Armstrong's aggressive AI adoption (40% AI-generated code) faces criticism over security risks and performative claims.

- AI coding agents show high false positives and fail to detect complex vulnerabilities, requiring human oversight and traditional static analysis for reliable security.

- Threat actors now use AI tools like Hexstrike-AI to automate exploits, reducing vulnerability-to-exploitation timelines, prompting calls for adaptive defenses and dark web monitoring.

A new cybersecurity threat has emerged targeting Cursor, an AI-powered coding tool favored by developers across the globe, including at Coinbase. HiddenLayer, a cybersecurity firm, disclosed a vulnerability dubbed the “CopyPasta License Attack,” which allows malicious actors to inject harmful instructions into commonly used developer files. This exploit can embed malware in codebases and enable the virus to propagate across an organization’s systems undetected [4].

The attack involves hiding malicious “prompt injections” within standard files like LICENSE.txt and README.md. These files often contain explanatory notes or metadata that are not displayed when the file is rendered. HiddenLayer demonstrated that these hidden instructions can direct AI coding tools like Cursor to incorporate malicious code during development. By disguising the payload as a critical license comment, the virus can be automatically distributed across files edited by the AI agent [4].
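As an illustration of this attack surface, HTML comments in Markdown are not rendered when the file is displayed, yet an AI agent ingesting the raw file still reads them. The following is a minimal defensive sketch, not HiddenLayer's tooling: the file list, pattern, and function name are illustrative assumptions. It surfaces non-empty hidden comments in license and readme files for human review.

```python
import re
from pathlib import Path

# Hypothetical defensive sketch (not from HiddenLayer's report): flag hidden
# HTML comments in files that AI coding agents commonly ingest as context.
TARGET_FILES = {"license.txt", "license", "readme.md", "readme"}
HIDDEN_COMMENT = re.compile(r"<!--(.*?)-->", re.DOTALL)

def find_hidden_payloads(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, comment) pairs for HTML comments found in license/readme files."""
    hits = []
    for path in Path(repo_root).rglob("*"):
        if path.is_file() and path.name.lower() in TARGET_FILES:
            text = path.read_text(errors="ignore")
            for match in HIDDEN_COMMENT.finditer(text):
                body = match.group(1).strip()
                if body:  # non-empty hidden comment: surface it for human review
                    hits.append((str(path), body))
    return hits
```

A check like this does not prove malicious intent; it simply makes invisible instructions visible so a reviewer, rather than an AI agent, decides what to do with them.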

According to an August report from Coinbase’s engineering team, Cursor is the preferred development tool for many of its engineers and had been adopted by nearly all of the company’s developers as of February. The vulnerability discovered by HiddenLayer also affects other AI coding tools such as Windsurf, Kiro, and Aider. The implications of this exploit are severe, as it could lead to backdoors, data exfiltration, resource-draining operations, or disruptions to development and production environments—all while remaining hidden from immediate detection [4].

The attack highlights the risks associated with increased AI adoption in software development. Coinbase CEO Brian Armstrong recently announced that up to 40% of the exchange’s code is AI-generated, with plans to expand this to 50% next month. While proponents argue that AI can accelerate development, critics warn of the dangers of mandating AI use without robust safeguards. Prominent figures in the cybersecurity and tech industries, including academics and consultants, have criticized Coinbase’s approach as “insane” and “performative,” emphasizing the need for greater focus on security and code quality [4].

The vulnerability also raises concerns about the non-deterministic behavior of AI tools in security analysis. Recent research by Semgrep evaluated the performance of AI coding agents like Anthropic’s Claude Code and OpenAI Codex in detecting vulnerabilities in real-world code. While these tools can identify certain security flaws, they produce a high number of false positives and often fail to detect more complex vulnerabilities such as SQL injection or XSS. The study also highlighted the inconsistent and noisy results generated by AI models, with findings varying significantly across multiple analyses of the same codebase [3].

This inconsistency, known as non-determinism, is attributed to factors like context rot and compaction—processes where the AI loses track of important details during complex code analysis. These limitations underscore the necessity for AI to be complemented with traditional static analysis tools and human oversight. Semgrep’s research emphasized that while AI can enhance contextual reasoning, it is still insufficient in deeply understanding the semantics of code execution, especially in injection-style vulnerabilities [3].
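To illustrate the complement Semgrep describes, a traditional static check is deterministic by construction: the same input always yields the same findings. The sketch below is illustrative only, not Semgrep's engine; the function name and the narrow heuristic (flagging f-strings or concatenation passed to a cursor's `execute()`) are assumptions chosen for brevity.

```python
import ast

# Illustrative deterministic check (not Semgrep's implementation): flag SQL
# built via f-strings or "+" concatenation and passed to execute().
# Unlike an LLM review, repeated runs on the same code return identical results.
def flag_sql_interpolation(source: str) -> list[int]:
    """Return line numbers of execute() calls whose first argument is built dynamically."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args):
            arg = node.args[0]
            # ast.JoinedStr is an f-string; ast.BinOp covers "+" concatenation
            if isinstance(arg, (ast.JoinedStr, ast.BinOp)):
                findings.append(node.lineno)
    return findings
```

A rule this simple misses many real injection paths, but its output never varies between runs—which is precisely the property the research found lacking in AI-only analysis.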

Coinbase’s use of AI is primarily concentrated in less-sensitive areas such as front-end development and non-critical data backends. However, the recent security findings suggest that even these areas are not immune to risks. Armstrong has acknowledged that AI-generated code requires review and should not be used universally. Despite this, his aggressive push for AI adoption, including mandatory use of tools like Cursor and GitHub Copilot, has drawn criticism for its potential to compromise security [4].

As AI tools become more integrated into software development, the threat landscape is evolving. Check Point recently reported that threat actors are leveraging agentic AI tools like Hexstrike-AI to automate and accelerate exploit development. This tool, designed for red teams, uses AI agents to orchestrate over 150 cybersecurity tools, enabling tasks like penetration testing and vulnerability discovery to be completed in under 10 minutes. The firm observed dark web discussions about using Hexstrike-AI to exploit recently disclosed Citrix NetScaler zero-days, demonstrating how AI can significantly reduce the time between vulnerability disclosure and exploitation [5].

Organizations are urged to patch systems, harden defenses, and adopt adaptive detection mechanisms that go beyond static rules. Monitoring dark web chatter for emerging threats and investing in AI-driven tools to detect anomalies in real-time are also recommended. Check Point emphasized that the convergence of AI orchestration and offensive cyber tools is no longer a theoretical concern—it is an operational reality that demands immediate attention [5].

As the use of AI in coding and security continues to expand, the need for robust safeguards, responsible AI practices, and collaborative efforts between developers and security professionals becomes increasingly critical. While AI offers transformative potential, its integration into core systems must be approached with caution, transparency, and a commitment to security.

Sources:

[3] Semgrep (https://semgrep.dev/blog/2025/finding-vulnerabilities-in-modern-web-apps-using-claude-code-and-openai-codex)

[4] Cointelegraph (https://cointelegraph.com/news/coinbase-preferred-ai-coding-tool-hijacked-new-virus)

[5] Infosecurity Magazine (https://www.infosecurity-magazine.com/news/threat-actors-hexstrikeai/)
