Cybersecurity experts have issued a warning about a critical vulnerability called ShadowLeak, which exploited ChatGPT’s Deep Research agent to steal personal data from Gmail accounts. The attack was made possible by hidden commands embedded in emails, which triggered the AI agent to exfiltrate sensitive information without user interaction. The discovery of this zero-click vulnerability by Radware researchers in June 2025 highlights a significant risk in AI-driven platforms that integrate with email services like Gmail.
Following the discovery, OpenAI addressed the issue by patching the vulnerability in early August. However, cybersecurity experts caution that similar flaws could emerge as AI integrations become more widespread across platforms like Gmail, Dropbox, and SharePoint. The attack mechanism involved embedding instructions in emails using white-on-white text or CSS layout tricks, making the emails appear harmless to users. When users prompted the Deep Research agent to analyze their Gmail inbox, the AI executed the hidden commands without question, allowing the theft of sensitive data through OpenAI’s cloud environment.
While the Deep Research agent was originally designed for multistep research and summarizing online data, its access to third-party apps like Gmail, Google Drive, and Dropbox also opened the door to abuse by attackers. Researchers at Radware explained that the attack involved encoding personal data in Base64 and appending it to a malicious URL, which was disguised as a