Weaponizing Calendar Invites: A Semantic Attack on Google Gemini
Summary:
Security researchers at Miggo recently disclosed a critical "semantic" vulnerability within the Google Gemini ecosystem that allows attackers to bypass privacy controls and exfiltrate sensitive data via malicious calendar invites. This attack utilizes Indirect Prompt Injection, where an attacker sends a standard calendar invite containing a hidden, "dormant" natural language payload in the description field. When a user later asks Gemini a routine question about their schedule (e.g., "What does my day look like?"), the AI parses the malicious invite, triggering the embedded instructions. During testing, the researchers successfully forced Gemini to summarize private meeting details from other calendar entries and exfiltrate that data by creating a new, publicly visible calendar event, all while providing the user with a benign, deceptive response to mask the activity.
Security Officer Comments:
This research highlights a significant shift in the threat landscape where the "attack surface" is no longer just code, but human language. This is particularly concerning for organizations integrated into the Google Workspace or Microsoft 365 ecosystems, where AI assistants have deep permissions to read and write across mail, calendar, and document suites. The primary impact on organizations is the potential for highly automated industrial espionage. An attacker does not need to compromise a network or bypass a firewall; they simply need to send an invite to an employee’s email address. Because the payload is "semantic" (meaning it looks like a normal request rather than malicious code like SQL injection), traditional signature-based security tools (WAFs/SEG) are unlikely to flag it.
Suggested Corrections:
To defend against these emerging semantic attacks and indirect prompt injections, the following mitigations are recommended:
Link(s):
https://www.miggo.io/post/weaponizing-calendar-invites-a-semantic-attack-on-google-gemini
Security researchers at Miggo recently disclosed a critical "semantic" vulnerability within the Google Gemini ecosystem that allows attackers to bypass privacy controls and exfiltrate sensitive data via malicious calendar invites. This attack utilizes Indirect Prompt Injection, where an attacker sends a standard calendar invite containing a hidden, "dormant" natural language payload in the description field. When a user later asks Gemini a routine question about their schedule (e.g., "What does my day look like?"), the AI parses the malicious invite, triggering the embedded instructions. During testing, the researchers successfully forced Gemini to summarize private meeting details from other calendar entries and exfiltrate that data by creating a new, publicly visible calendar event, all while providing the user with a benign, deceptive response to mask the activity.
Security Officer Comments:
This research highlights a significant shift in the threat landscape where the "attack surface" is no longer just code, but human language. This is particularly concerning for organizations integrated into the Google Workspace or Microsoft 365 ecosystems, where AI assistants have deep permissions to read and write across mail, calendar, and document suites. The primary impact on organizations is the potential for highly automated industrial espionage. An attacker does not need to compromise a network or bypass a firewall; they simply need to send an invite to an employee’s email address. Because the payload is "semantic" (meaning it looks like a normal request rather than malicious code like SQL injection), traditional signature-based security tools (WAFs/SEG) are unlikely to flag it.
Suggested Corrections:
To defend against these emerging semantic attacks and indirect prompt injections, the following mitigations are recommended:
- Restrict Calendar Auto-Add: Organizations should configure Google Workspace or Outlook settings to prevent calendar invites from unknown external senders from being automatically added to user calendars. This forces a manual review before the AI assistant ever "sees" the malicious payload.
- Implement AI Access Guardrails: Where possible, utilize administrative controls to limit the scopes of AI assistants. Restrict the AI's ability to "create" or "modify" resources (like creating new public events) unless explicitly confirmed by a manual user action (Human-in-the-Loop).
- User Awareness Training: Update security awareness programs to include "Prompt Injection" risks. Employees should be cautioned that interacting with AI assistants regarding content received from external sources (emails, invites, shared docs) can trigger unauthorized actions.
Link(s):
https://www.miggo.io/post/weaponizing-calendar-invites-a-semantic-attack-on-google-gemini