
A newly discovered vulnerability in ChatGPT’s calendar integration can be exploited to hijack users’ email accounts, according to researchers at EdisonWatch. The attack works by sending a crafted calendar invite that contains a “jailbreak prompt,” which, when processed, allows ChatGPT to access private email content and leak it to an attacker.
How the Calendar Integration Flaw Works
The exploit begins with an attacker sending a calendar invite with embedded malicious instructions. The invite doesn’t even need to be accepted by the target.
When the user then asks ChatGPT something like “What’s on my calendar today?”, the assistant reads the event description and follows the hidden instructions to search through the user’s email and forward sensitive content. Crucially, this depends on the user having previously enabled ChatGPT’s Gmail and Google Calendar connectors.
Under What Conditions the Threat Exists
OpenAI introduced native Gmail, Calendar, and Contacts connectors via the Model Context Protocol (MCP) in mid-August 2025. Once enabled, these connectors allow ChatGPT to automatically reference data from those linked services. The risk arises when combined with indirect prompt injection: hidden or malicious content in calendar data that the assistant is allowed to read.
Even though the feature is powerful, there are ways for users to reduce exposure. OpenAI’s documentation notes users can disable automatic linking of these sources or disconnect them.
What You Can Do To Protect Yourself
- Disable or limit automatic linking of Gmail, Calendar, and Contacts in ChatGPT, so you control when and how data is used.
- Restrict calendar settings: disable automatic addition of invites, allow only invites from known senders, and hide declined events.
- Be cautious when enabling new browser or API connectors for AI tools. Understand what permissions you’re granting and who can trigger actions.
Calendar invites are often considered harmless, which means many users may not realize they’re exposing their emails via seemingly benign actions. The danger is heightened by user behavior: many people trust tools implicitly and may approve prompts without scrutiny.