2 min read
Google Gemini flaw allowed data exposure through malicious invites
Farah Amod
February 16, 2026
Researchers found that crafted calendar invitations could trigger unauthorized data access without user interaction.
What happened
Security researchers disclosed a vulnerability in Google Gemini that enabled attackers to exploit indirect prompt injection via Google Calendar invites. According to The Hacker News, a malicious actor could embed a hidden natural language prompt inside a calendar event description. If the user later asked Gemini a routine question about their schedule, the chatbot could be manipulated into summarizing private calendar data and writing it into a new event that was visible to the attacker. The issue was quickly reported and has since been fixed by Google.
Going deeper
The attack starts with a calendar invitation sent by the attacker to the target. The event description includes a hidden prompt intended to influence Gemini at a later stage. When the user asks Gemini about upcoming meetings, the model processes the embedded instruction and generates a new calendar entry containing summaries of private meetings. In some enterprise setups, the newly created event can be accessed by the attacker, allowing data to be collected without alerting the victim. Researchers noted that the attack does not require clicks, approvals, or unusual user behavior beyond a routine scheduling query, making it potentially more unassuming to even risk-averse users.
What was said
Dark Reading reported that the flaw exposes a structural weakness in how AI-integrated products interpret intent when processing natural language. Although Google has safeguards in place to detect malicious prompts, the researchers have shown that those controls could be sidestepped by embedding instructions in ordinary calendar invites that Gemini later acted on as part of its normal workflow.
The report noted that “AI native features introduce a new class of exploitability,” because “AI applications can be manipulated through the very language they’re designed to understand.” As a result, security gaps can come from how an AI interprets words and context, not just from flaws in software code. That means data can be exposed even when access permissions are set correctly.
In the know
Recent research into AI coding tools shows that faster development often comes with security problems. A benchmark from Tenzai tested popular coding agents such as Cursor, Claude Code, OpenAI Codex, Replit, and Devin across fifteen sample applications and found sixty-nine security issues. Every tool produced insecure code, including weak authentication, missing access controls, server-side request forgery, and unsafe default settings. The testing showed that these tools tend to skip basic security checks unless developers step in and add them. Researchers warned that teams using AI-generated code without careful review risk pushing those weaknesses into live systems.
The big picture
The Gemini calendar issue lands at a moment when regulators are openly warning that AI systems are starting to act more like autonomous operators than passive tools. In a January 2026 filing, the Center for AI Safety and Innovation said modern AI agents are “capable of taking autonomous actions that impact real-world systems,” and may be exposed to hijacking, backdoor abuse, or other exploits. The concern isn’t only about attackers breaking into models. CAISI also warned that even “uncompromised models” can pose risks to “confidentiality, availability, or integrity” when they misinterpret inputs or act on the wrong objectives. The Gemini flaw shows how that plays out in practice. No permissions were bypassed, and no systems were hacked, but a hidden instruction buried in ordinary calendar text was enough to pull private information across application boundaries.
FAQs
What is indirect prompt injection?
It occurs when an attacker hides instructions inside content that an AI later processes, causing unintended actions without the user knowingly issuing a command.
Why was Google Calendar involved in this issue?
Gemini had permission to read and create calendar events, which allowed attackers to use calendar descriptions as a data channel.
Did users need to click the malicious invite?
No. The attack could activate later when the user asks Gemini about their schedule.
Why are AI assistants expanding the attack surface?
They can access multiple services and act autonomously, which increases the impact of misinterpreted instructions.
How can organizations reduce this risk?
They can limit cross-application permissions, monitor AI-generated actions, and treat all user-supplied content as potentially hostile.
Subscribe to Paubox Weekly
Every Friday we bring you the most important news from Paubox. Our aim is to make you smarter, faster.
