As organizations consider agentic AI for their business and IT stacks, researchers continue to find bugs and vulnerabilities in major, commercial models that can significantly expand their attack surface.
This week, researchers at Pillar Security disclosed a vulnerability in Antigravity, an AI-powered developer tool for filesystem operations made by Google.
The bug, since patched, combined prompt injection with Antigravity’s permitted file-creation capability to grant attackers remote code execution privileges.
The research details how the exploit was able to circumvent Antigravity’s secure mode, Google’s highest security setting for its agents that runs all command operations through a virtual sandbox environment, throttles network access and prohibits the agent from writing code outside of the working directory.
Secure mode is supposed to limit the AI agent access to sensitive systems – and its ability to execute malicious or dangerous acts through shell commands. But one of the file-searching tools used by Antigravity, called “find_by_name,” is classified as a ‘native’ system tool. This means the agent can execute it directly and before protections like Secure Mode can even evaluate command level operations.
“The security boundary that Secure Mode enforces simply never sees this call,” wrote Dan Lisichkin, an AI security researcher with Pillar Security. “This means an attacker achieves arbitrary code execution under the exact configuration a security-conscious user would rely on to prevent it.”
The prompt injection attacks can be delivered through compromised identity accounts connected to the agent, or indirectly by hiding clandestine prompt instructions inside open-source files or web content the agent ingests. Antigravity has trouble distinguishing between written data it ingests for context and literal prompt instructions, so compromise can be achieved without any elevated access by getting it to read a malicious document or file.
According to a disclosure timeline provided by Pillar Security, the bug was reported to Google on Jan. 6 and patched on Feb. 28, with Google awarding a bug bounty for the discovery.
Lisichkin said this same pattern of prompt injection through unvalidated input has been found in other coding AI agents like Cursor. In the age of AI, any unvalidated input can become a malicious prompt capable of hijacking internal systems.
“The trust model underpinning security assumptions, that a human will catch something suspicious, does not hold when autonomous agents follow instructions from external content,” he wrote.
The fact that the vulnerability was able to completely bypass Google’s secure mode underscores how the cybersecurity industry must start adapting and “move beyond sanitization-based controls.”
“Every native tool parameter that reaches a shell command is a potential injection point. Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely,” Lisichkin wrote.
The post Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution appeared first on CyberScoop.
Author: rcf
-
Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution
-

Vercel Employee’s AI Tool Access Led to Data Breach
Stolen OAuth tokens, which are at the root of these breaches, “are the new attack surface, the new lateral movement,” a researcher noted.
-
Vercel’s security breach started with malware disguised as Roblox cheats
Vercel customers are at risk of compromise after an attacker hopped through multiple internal systems to steal credentials and other sensitive data, the company said in a security bulletin Sunday.
The attack, which didn’t originate at Vercel, showcases the pitfalls of interconnected cloud applications and SaaS integrations with overly privileged permissions.
An attacker traversed third-party systems and connections left exposed by employees before it hit the San Francisco-based company that created and maintains Next.js and other popular open-source libraries.
Researchers at Hudson Rock said the seeds of the attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments.
Each of the companies are pinning at least some blame for the attack on the other vendor.
Context.ai on Sunday said that breach allowed the attacker to access its AWS environment and OAuth tokens for some users, including a token for a Vercel employee’s Google Workspace account. Vercel is not a Context customer, but the Vercel employee was using Context AI Office Suite and granted it full access, the artificial intelligence agent company said.
“The attacker used that access to take over the employee’s Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as sensitive,” Vercel said in its bulletin.
The company said a limited number of its customers are impacted and were immediately advised to rotate credentials. The company, which declined to answer questions, did not specify which internal systems were accessed or fully explain how the attacker gained access to Vercel customers’ credentials.
Vercel CEO Guillermo Rauch said customer data stored by the company is fully encrypted, yet the attacker got further access through enumeration, or by counting and inventorying specific variables.
“We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI,” he said in a post on X. “They moved with surprising velocity and in-depth understanding of Vercel.”
A threat group identifying themselves as ShinyHunters took responsibility for the attack in a post on Telegram and is attempting to sell the stolen data, which they claim includes access keys, source code and databases.
The attacker “is likely an imposter attempting to use an established name to inflate their notoriety,” Austin Larsen, principal threat analyst at Google Threat Intelligence, wrote in a LinkedIn post. “Regardless of the threat actor involved, the exposure risk is real.”
Vercel also warned that the attack on Context’s Google Workspace OAuth app “was the subject of a broader compromise, potentially affecting its hundreds of users across many organizations.” It published indicators of compromise and encouraged customers to review activity logs, review and rotate variables containing secrets.
Context and Vercel said their separate and coordinated investigations into the attack aided by CrowdStrike and Mandiant remain underway.
The post Vercel’s security breach started with malware disguised as Roblox cheats appeared first on CyberScoop. -
Serial-to-IP Converter Flaws Expose OT and Healthcare Systems to Hacking
Forescout researchers discovered 20 new vulnerabilities in Lantronix and Silex products and described theoretical attack scenarios.
The post Serial-to-IP Converter Flaws Expose OT and Healthcare Systems to Hacking appeared first on SecurityWeek. -
WhatsApp Leaks User Metadata to Attackers
Strangers can infer limited info about you without knowing or messaging you, which could theoretically aid certain kinds of malicious activity.
-
Vercel Breach Tied to Context AI Hack Exposes Limited Customer Credentials
Web infrastructure provider Vercel has disclosed a security breach that allows bad actors to gain unauthorized access to “certain” internal Vercel systems.
The incident stemmed from the compromise of Context.ai, a third-party artificial intelligence (AI) tool, that was used by an employee at the company.
“The attacker used that access to take over the employee’s Vercel Google Workspace account, -
Vercel confirms breach as hackers claim to be selling stolen data
Cloud development platform Vercel has disclosed a security incident after threat actors claimed to have breached its systems and are attempting to sell stolen data. […]
-
Tycoon 2FA Loses Phishing Kit Crown Amid Surge in Attacks
Threat actors are reusing Tycoon 2FA tools across other phishing kits following the platform’s disruption.
The post Tycoon 2FA Loses Phishing Kit Crown Amid Surge in Attacks appeared first on SecurityWeek. -
[Webinar] Eliminate Ghost Identities Before They Expose Your Enterprise Data
In 2024, compromised service accounts and forgotten API keys were behind 68% of cloud breaches. Not phishing. Not weak passwords. Unmanaged non-human identities that nobody was watching.
For every employee in your org, there are 40 to 50 automated credentials: service accounts, API tokens, AI agent connections, and OAuth grants. When projects end or employees leave, most -
$13.74M Hack Shuts Down Sanctioned Grinex Exchange After Intelligence Claims
Grinex, a Kyrgyzstan-incorporated cryptocurrency exchange sanctioned by the U.K. and the U.S. last year, said it’s suspending operations after it blamed Western intelligence agencies for a $13.74 million hack.
The exchange said it fell victim to what it described as a large-scale cyber attack that bore hallmarks of foreign intelligence agency involvement. This attack led to the theft of over 1



![[Webinar] Eliminate Ghost Identities Before They Expose Your Enterprise Data](https://mlnw8dv3x6c6.i.optimole.com/w:900/h:470/q:mauto/f:best/https://zerobreachsecurity.site/wp-content/uploads/2026/04/ghost-Sa40RB.jpg)
