Google has patched a security vulnerability that allowed hackers to remotely hijack its Gemini AI assistant by sending malicious calendar invitations to victims.
The flaw enabled attackers to access emails, control smart home devices and track user locations without requiring any interaction from victims beyond normal use of the assistant.
Researchers at SafeBreach Labs discovered the vulnerability works by embedding harmful commands in Google Calendar event titles. When users ask Gemini about their schedule, the assistant processes these hidden instructions as legitimate requests.
The bug affected all versions of Gemini, including web browsers, mobile applications and Android voice assistants connected to Google Workspace. Hackers could exploit the AI’s permissions to access Gmail, Calendar and connected home devices.
Google says no exploitation occurred before the company implemented fixes.
Attack required basic skills
The vulnerability bypassed existing security measures and required no advanced technical knowledge. Researchers demonstrated successful attacks using standard calendar features available to any Google user.
Attackers could send up to six calendar invitations to maintain stealth, hiding malicious commands in the final invitation. Google Calendar displays only five recent events directly, concealing additional entries behind a “Show more” button that Gemini still processes during queries.

The attack method exploited “context poisoning,” where hidden commands become part of Gemini’s conversation history. This causes the AI to follow hostile instructions while users remain unaware of any compromise.
SafeBreach researchers demonstrated multiple attack capabilities during their investigation. These included triggering spam campaigns, generating inappropriate content and remotely deleting victim calendar entries.
More serious capabilities involved controlling smart home devices through Google Home integration. Researchers successfully opened windows, adjusted heating systems and controlled lighting by exploiting the assistant’s connections to internet-enabled devices.
Privacy and security risks
The vulnerability enabled location tracking through forced website visits that captured victim IP addresses. Attackers could also initiate unauthorized video calls, potentially enabling surveillance through device cameras and microphones.
Data theft represented a significant risk. The flaw allowed extraction of email content and calendar information through specially crafted web addresses that transmitted sensitive data to attacker-controlled servers.
Mobile versions faced additional exposure due to Gemini’s integration with Android system functions. This connection allowed manipulation of phone features including application launches, screenshot capture and media controls.
Researchers bypassed URL security restrictions using redirect services that forced Chrome to open malicious websites. This worked because Gemini automatically followed redirects without displaying security warnings normally shown in browser sessions.
The attack also supported “delayed execution” where malicious instructions activated during future user interactions. This persistence allowed attackers to maintain access across multiple Gemini sessions.
High risk rating
The research team developed a threat analysis framework specifically for AI-powered applications. Their evaluation assessed attack feasibility, required expertise and potential damage across privacy, financial, safety and operational categories.
Results classified 73% of identified threats as high or critical risk, requiring immediate remediation. The assessment found these attacks require significantly less technical skill than traditional cyber threats while potentially causing greater harm.
The vulnerability demonstrated movement between different Gemini functions and extension beyond application boundaries to manipulate external systems outside Google’s direct control.
Google addressed the reported vulnerabilities before any documented exploitation occurred. The company implemented enhanced user confirmation requirements for sensitive actions and improved web address handling with validation protocols.
Advanced detection systems now employ content analysis algorithms to identify malicious instructions. These protections underwent extensive internal testing before deployment to all Gemini users.
Andy Wen, senior director of security product management for Google Workspace, acknowledged the researchers’ responsible disclosure approach and said it accelerated deployment of new protective measures.
Wider implications
The discovery highlights security challenges facing artificial intelligence integration across digital services. Traditional cybersecurity approaches targeting software bugs may prove insufficient for AI-integrated systems.
Security specialists anticipate shifts in application attack methods as AI adoption increases. These new threats target reasoning processes rather than code vulnerabilities, representing a fundamental change in attack methodology.
User trust in AI assistants compounds the security risk since people typically accept system recommendations without questioning potential external manipulation.
SafeBreach researchers Or Yair, Ben Nassi and Stav Cohen notified Google in February 2025 following responsible disclosure procedures. The team provided detailed technical documentation and collaborated during the remediation process.
The findings were presented at Black Hat USA and DEF CON 33 security conferences. Complete research documentation enables organizations to assess similar risks in their AI-powered systems.
Future threats may include attacks requiring no user interaction and methods targeting multiple users simultaneously through public platforms.
Read next: Man Develops Rare Bromide Poisoning After Following AI Diet Suggestion