Google AI Email Security Flaw Allows Hidden Phishing Commands

A significant security vulnerability has been uncovered in Google’s Gemini AI integration within its Workspace platform, potentially allowing cyberattacks to exploit AI-generated email summaries. Security researchers from Mozilla’s 0Din lab have identified a method that enables attackers to inject hidden instructions into email summaries without triggering standard email security measures. By using HTML and CSS to set text to zero font size and white color, attackers can conceal commands that Gemini AI is then capable of processing as part of its summaries. This method can trick the AI into displaying false security alerts or urgent instructions that appear to originate from Google, increasing the likelihood that users would trust the content. The technique bypasses protective measures Google implemented against prompt injection attacks, which were introduced in 2024. In a demonstration, the AI was successfully manipulated to alert a user about a compromised password with a fake support number. Google has acknowledged the vulnerability and confirmed they are rolling out updated security measures. However, the flaw still highlights the risks of relying on AI to filter emails, particularly with the growing integration of AI into productivity tools and the evolution of phishing tactics.