Zenity Labs has exposed widespread vulnerabilities in major enterprise AI agents, including OpenAI's ChatGPT, Microsoft Copilot Studio, Salesforce Einstein, and others. The company's research, presented at Black Hat USA 2025, demonstrated 0click exploit chains that allow attackers to silently compromise these agents, exfiltrate data, manipulate workflows, and act autonomously across enterprise systems without user interaction. The findings represent a fundamental shift in the AI security landscape to fully automated attacks.
Zenity Labs has revealed significant vulnerabilities in major enterprise AI agents, including OpenAI's ChatGPT, Microsoft Copilot Studio, Salesforce Einstein, and others. The company's research, presented at Black Hat USA 2025, demonstrated 0click exploit chains that allow attackers to silently compromise these agents, exfiltrate data, manipulate workflows, and act autonomously across enterprise systems without user interaction. The findings represent a fundamental shift in the AI security landscape to fully automated attacks [1].
The research, led by Zenity co-founder and CTO Michael Bargury and threat researcher Tamir Ishay Sharbat, showcased working exploits against several high-profile AI tools. OpenAI ChatGPT was compromised via email-triggered prompt injection, granting attackers access to connected Google Drive accounts and the ability to implant malicious memories, compromise future sessions, and transform ChatGPT into a malicious agent. Microsoft Copilot Studio was shown to leak entire CRM databases, and Salesforce Einstein was manipulated through malicious case creation to reroute all customer communications to attacker-controlled email addresses. Google Gemini and Microsoft 365 Copilot were turned into malicious insiders, social engineering users and exfiltrating sensitive conversations through booby-trapped emails and calendar invites. Cursor with Jira MCP was exploited to harvest developer credentials through weaponized ticket workflows [1].
The findings highlight the urgent need for enterprises to reassess their security approaches and invest in agent-centric security platforms. While some vendors, including OpenAI and Microsoft Copilot Studio, issued patches following responsible disclosure, multiple vendors declined to address the vulnerabilities, citing them as intended functionality. This mixed response underscores a critical gap in how the industry approaches AI agent security [1].
The rapid adoption of AI agents has created an attack surface that most organizations don't even know exists. With ChatGPT reaching 800 million weekly active users and Microsoft 365 Copilot seats growing 10x in just 17 months, organizations are rapidly deploying AI agents without adequate security controls. Zenity Labs' findings suggest that enterprises relying solely on vendor mitigations or traditional security tools are leaving themselves exposed to an entirely new class of attacks [1].
The industry response and implications of Zenity Labs' research are significant. The company's agent-centric security platform aims to give enterprises the visibility and control they desperately need. As a research-driven security company, Zenity Labs conducts this threat intelligence on behalf of the wider AI community, ensuring defenders have the same insights as attackers. The complete research, including technical breakdowns and defense recommendations, will be available at labs.zenity.io following the presentation [1].
References:
[1] https://www.prnewswire.com/news-releases/zenity-labs-exposes-widespread-agentflayer-vulnerabilities-allowing-silent-hijacking-of-major-enterprise-ai-agents-circumventing-human-oversight-302523580.html
Comments
No comments yet