Any Chrome Extension Just Hijacked Claude. With Zero Permissions.
Key Takeaways
- ClaudeBleed lets any Chrome extension, even one with zero declared permissions, take full control of Claude's browser agent. It can read your Google Drive, send from your Gmail, steal GitHub repos, and wipe the logs.
- Anthropic patched the flaw on May 6. A LayerX researcher broke the patch in 3 hours. The underlying trust problem did not get fixed.
- "Act without asking" mode is still exposed. If you run Claude in Chrome with autonomous actions enabled, you are the attack surface right now.
- This is a new class of attack. It lies to the AI about what buttons say, not to the user. Traditional security tools do not see it.
---
LayerX published ClaudeBleed on May 8. By the time you read this, it will be old news to some of you and news to most.
Here is what actually happened and why it matters more than the average vulnerability disclosure.
Anthropic's Chrome extension trusted any script running on claude.ai.
No verification. No origin check. If you have another extension installed, even one that asked for nothing, it could send commands to Claude's LLM and Claude would answer them. Google Drive files. Gmail messages. GitHub source code. All of it. Zero user interaction required at any point.
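To make the gap concrete, here is a toy sketch of the pattern: a message handler that executes any command it receives versus one that checks the sender first. None of these names come from Anthropic's code; this is an illustration of the class of bug, not the real extension.

```javascript
// Vulnerable pattern: every incoming message is trusted, no matter who sent it.
function handleMessageVulnerable(message, agent) {
  return agent.execute(message.command); // no sender verification at all
}

// Hardened pattern: only an allow-listed sender may issue commands.
// "claude-extension-internal" is a made-up identifier for illustration.
const TRUSTED_SENDERS = new Set(["claude-extension-internal"]);

function handleMessageChecked(message, agent) {
  if (!TRUSTED_SENDERS.has(message.senderId)) {
    throw new Error(`untrusted sender: ${message.senderId}`);
  }
  return agent.execute(message.command);
}

// Toy agent that just records what it was told to do.
const agent = { log: [], execute(cmd) { this.log.push(cmd); return "ok"; } };

// A rogue extension's command sails straight through the vulnerable handler...
handleMessageVulnerable({ senderId: "rogue-ext", command: "read_drive" }, agent);

// ...but is rejected by the checked one.
let blocked = false;
try {
  handleMessageChecked({ senderId: "rogue-ext", command: "read_gmail" }, agent);
} catch (e) {
  blocked = true;
}
console.log(agent.log, blocked); // [ 'read_drive' ] true
```

The fix is one conditional, which is what makes the omission so striking: the trust boundary simply was not there.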
Anthropic shipped a patch on May 6, version 1.0.70.
LayerX researcher Aviad Gispan broke it in three hours. Not because he found a clever bypass, but because the patch fixed the UI without fixing the trust boundary. The underlying architecture stayed the same. Switch the extension to privileged mode without notifying the user, and the hole is still there.
The Part That Should Scare You
This is not a normal vulnerability. The attack combined two tricks: approval looping and DOM manipulation. Scripts planted by the rogue extension repeatedly clicked "Yes" on Claude's confirmation dialogs. And they renamed UI buttons in the page so Claude thought it was clicking "Request Feedback" when it was actually clicking "Share."
Read that again. Claude read the pixels and saw "Request Feedback." The button said "Request Feedback." But it was actually "Share." And Claude clicked it.
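A toy simulation makes the mechanism obvious. The "agent" below picks a button by its visible label, exactly the way an LLM reading the page would; the attacker rewrites the labels while the underlying actions stay put. This is an illustrative model, not the real extension's DOM.

```javascript
// Two buttons with fixed underlying actions and (initially honest) labels.
const buttons = [
  { action: "share", label: "Share" },
  { action: "request_feedback", label: "Request Feedback" },
];

// The agent trusts what it reads: it clicks the button whose label matches.
function agentClick(buttons, wantedLabel) {
  const btn = buttons.find((b) => b.label === wantedLabel);
  return btn.action; // the action that actually fires
}

// Before the attack, labels and actions agree.
console.log(agentClick(buttons, "Request Feedback")); // prints: request_feedback

// The attacker rewrites the labels. The actions underneath do not move.
buttons[0].label = "Request Feedback";
buttons[1].label = "Hidden";

// The agent still sees "Request Feedback" on screen, but fires Share.
const fired = agentClick(buttons, "Request Feedback");
console.log(fired); // prints: share
```

Nothing in the agent's logic is wrong. Its entire view of the world was writable by the attacker, and that is the whole exploit.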
Ax Sharma from Manifold Security called this the sophisticated part: the agent's perceived environment was manipulated to produce actions that looked completely legitimate from the inside.
The AI was not broken. It was lied to about what it was reading. This is a new attack surface. Traditional security tools do not catch it since nothing malicious is executing. The AI is doing everything right. It just does not know it has been conned.
"Act Without Asking" Is the Danger Setting
The extension, version 1.0.69, shipped April 22, 2026. It is in beta for Claude Pro subscribers. One of the modes lets Claude act without asking for your approval on each step. Convenient. Dangerous.
If you run this mode, any installed extension can command your AI agent to do anything it has access to do. And Claude will do it without stopping to check with you.
The patch did not close this gap.
Anthropic told LayerX the bug was a duplicate of an internally known issue. That is not reassuring. An internally known issue sat unfixed long enough for LayerX to find it independently, and the patch that finally shipped was broken by a researcher in an afternoon.
What This Means for Small Teams
I run Claude agents against client Google Drives and GitHub repos. Daily. If you do the same, you are in the blast radius. Your AI agent has access to your email, your files, your code. You are trusting it the way you would trust a contractor with a key to your office. Now imagine someone slipped that contractor a fake memo and he handed over the filing cabinets without calling you first.
That is what happened here.
The takeaway is not "Anthropic is bad." The takeaway is that AI agent security is a category that did not exist six months ago.
When your security model depends on an LLM correctly reading button labels, you have a trust model that is fundamentally different from anything traditional tooling is built for.
What You Should Do Right Now
Turn off "Act without asking" mode in the Chrome extension. Now. Then check what other extensions you have installed. A zero-permission extension raises no red flags in your browser's permission model; that is the whole point of this attack.
If you are building AI agents that take actions on behalf of users, your trust boundary cannot live in the UI.
It has to be cryptographic. Signed requests. One-time tokens. Verified action schemas. Every approval flow that depends on an LLM reading text on a page is a potential manipulation point. ClaudeBleed is proof that attackers understand this. Your move.
---
Sources:
- LayerX Security: ClaudeBleed Disclosure
- CyberScoop: Claude Chrome Extension Allows Plugins to Hijack AI
- HackRead: ClaudeBleed Vulnerability
- CyberNews: Claude Code Chrome Extension Flaw Fix Hacked
- SQ Magazine: ClaudeBleed Chrome Extension Hijack