AI code suggestions sabotage software supply chain
9/3/2025 02:17pm
AI code suggestions can pose significant risks to the software supply chain by introducing malicious code through hallucinated package names. Here's how attackers can exploit those suggestions:
1. **Hallucinated Package Names**: AI coding assistants, powered by large language models, sometimes suggest package names that do not exist. This phenomenon, known as hallucination, gives attackers an opening: they can register malicious packages under the fabricated names.
2. **Typosquatting and Slopsquatting**: Attackers can register malicious packages under these suggested but non-existent names on registries like PyPI or npm. When an AI assistant later hallucinates the same name, a developer who trusts the suggestion can inadvertently pull in the malware simply by installing the dependency and running its code. The attack resembles typosquatting, in which attackers register misspellings of popular package names to dupe users; Seth Michael Larson, security developer-in-residence at the Python Software Foundation, coined the term slopsquatting for the variant that squats on AI-hallucinated names. (A pre-install existence check is sketched after this list.)
3. **Bimodal Pattern of Hallucination**: The recurrence of hallucinated names follows a bimodal pattern: re-running the same prompt shows that some names reappear consistently while others vanish entirely. This unpredictability makes it challenging to develop foolproof defenses against such attacks. (A small harness for measuring this recurrence also follows the list.)
4. **Impact on the Software Supply Chain**: A malicious AI model that makes its way into the supply chain can execute unauthorized code inside downstream systems, resulting in data exfiltration, tampering with data integrity, and unauthorized access to critical systems. The lack of rigorous testing for AI models gives adversaries an opening to inject malicious functionality.
5. **Organizational Vulnerabilities**: Organizations may unknowingly introduce malicious AI models into their systems through compromised development tools, tampered libraries, or poisoned pre-trained models. Without robust validation processes, such models can slip in unnoticed, undermining trust in software components and vendors.
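The most direct guardrail against the first two problems is to refuse to install any suggested name that cannot be verified. The sketch below queries PyPI's public JSON API (`https://pypi.org/pypi/<name>/json`), which returns 404 for names that have never been registered; treat it as an illustrative starting point rather than a complete defense:

```python
import sys
import urllib.error
import urllib.request
from urllib.parse import quote

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` has ever been registered on PyPI."""
    url = f"https://pypi.org/pypi/{quote(name)}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # Never registered: exactly the kind of name a slopsquatter targets.
            return False
        raise  # any other HTTP error: fail closed and check manually

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        verdict = "found on PyPI" if exists_on_pypi(pkg) else "NOT on PyPI; do not install blindly"
        print(f"{pkg}: {verdict}")
```

Existence alone is weak evidence, though: an attacker may already have registered the hallucinated name, so a package that does exist but is brand new or rarely downloaded still warrants manual review.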
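To see the bimodal pattern from item 3 in practice, one can re-run the same prompt many times and count how many runs each suggested name appears in. The harness below is a sketch with a mocked model call: `suggest_packages` is a hypothetical stand-in, and `flask-jwt-helper` is an invented name used only to simulate the behavior; a real version would prompt a code assistant and parse package names out of its reply:

```python
import collections
import random

# Hypothetical stand-in for a real code-assistant call. A production harness
# would prompt the model ("which packages do I need for X?") and parse package
# names out of its reply; this mock merely simulates the reported behavior.
def suggest_packages(prompt: str, rng: random.Random) -> list[str]:
    persistent = ["flask-jwt-helper"]  # invented name that recurs in every run
    one_off = f"py-{rng.randrange(10**6)}-utils"  # invented name that never repeats
    return persistent + ([one_off] if rng.random() < 0.5 else [])

def recurrence_counts(prompt: str, runs: int = 50, seed: int = 0) -> collections.Counter:
    """Re-run the same prompt and count how many runs each name appears in."""
    rng = random.Random(seed)
    counts: collections.Counter = collections.Counter()
    for _ in range(runs):
        counts.update(set(suggest_packages(prompt, rng)))
    return counts

if __name__ == "__main__":
    for name, n in recurrence_counts("JWT auth in Flask").most_common(5):
        print(f"{n:3d}/50  {name}")
```

A bimodal result shows up as a few names counted in nearly every run alongside a long tail of names seen exactly once; only the persistent names are reliable targets for blocklisting, which is why the pattern frustrates simple defenses.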
In conclusion, AI code suggestions introduce new risks to the software supply chain. Developers and organizations should be aware of these risks and adopt strict validation processes, such as checking artifact digests before installation, to protect the integrity of both their AI models and the packages those models recommend. One such check is sketched below.
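As a minimal sketch of the kind of validation the conclusion calls for, the snippet below compares a downloaded artifact's SHA-256 digest against a reviewed allowlist before it enters a build. The file name and digest entry are placeholders, not real values:

```python
import hashlib
import pathlib

# Reviewed allowlist mapping artifact file names to their SHA-256 digests.
# The entry below is a placeholder, not a real digest.
ALLOWED_DIGESTS = {
    "example_pkg-1.0-py3-none-any.whl": "<sha256 hex digest recorded at review time>",
}

def verify_artifact(path: pathlib.Path) -> bool:
    """Return True only if the file's digest matches the reviewed allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = ALLOWED_DIGESTS.get(path.name)
    return expected is not None and digest == expected
```

pip offers the same guarantee natively: recording `--hash=sha256:...` entries in a requirements file and installing with `pip install --require-hashes -r requirements.txt` makes pip reject any artifact whose digest does not match.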