The OpenClaw Security Crisis: Nine CVEs in Four Days and Why AI Agents Are the New Attack Frontier
Imagine browsing a website on your lunch break. A random link from social media catches your eye. You click it. Within seconds, an attacker has full control of your AI agent - reading your emails, accessing your files, and impersonating you in conversations. No phishing email. No malware download. Just a malicious website and a vulnerability in the AI assistant you trusted to manage your digital life.
This is not a hypothetical scenario. This is exactly what happened with OpenClaw, the viral AI agent that became one of the fastest-growing open-source projects in GitHub history. Between March 18 and March 21, 2026, nine CVEs were publicly disclosed for OpenClaw. One was critical, scoring 9.9 out of 10 on the CVSS scale; six were high severity and two were medium.
The OpenClaw security crisis represents something larger than a single vulnerable project. It signals the arrival of a new attack frontier - one where AI agents with broad system access become the primary targets for sophisticated adversaries. If you are running an AI agent in your environment, or planning to deploy one, what you learn in the next few minutes could prevent a catastrophic breach.
What Is OpenClaw and Why Did It Explode in Popularity?
OpenClaw (originally called Clawdbot, then Moltbot, after trademark disputes) is an open-source AI agent created by developer Peter Steinberger. Unlike traditional AI assistants that simply answer questions, OpenClaw is autonomous. It can execute shell commands, read and write files, browse the web, send emails, manage calendars, and take actions across your digital life without constant human supervision.
Users interact with OpenClaw through messaging platforms like WhatsApp, Slack, Telegram, Discord, and iMessage. The agent runs locally and connects to large language models like Claude or GPT. Its persistent memory feature means it remembers context across sessions, learning your preferences and habits over time.
The appeal is immediate and powerful. An AI assistant that takes action on your behalf, running on your own hardware, with access to everything you do digitally. People are buying dedicated hardware just to run OpenClaw around the clock. The project amassed over 135,000 GitHub stars within weeks of launch, becoming one of the fastest-growing repositories in GitHub's history.
But that capability comes with serious consequences, and it did not take long for them to surface.
The Cascade of Security Failures: A Timeline of Disaster
Within just two weeks of going viral, OpenClaw was associated with a growing number of security incidents that escalated in both scope and severity. These issues ranged from traditional vulnerabilities to exposed management interfaces and the distribution of malicious skills. Individually, each would be concerning. Taken together, they illustrate why AI agents with broad system access represent a fundamentally new risk category.
January 27-29, 2026: The ClawHavoc Malware Campaign
Attackers distributed 335 malicious skills via ClawHub, OpenClaw's public marketplace. These skills used professional documentation and innocuous names like "solana-wallet-tracker" to appear legitimate, then instructed users to run external code that installed keyloggers on Windows or Atomic Stealer malware on macOS.
Researchers later confirmed 341 malicious skills total out of 2,857 - meaning roughly 12% of the entire registry was compromised. This was not a supply-chain attack in the traditional sense. This was a platform-level compromise where attackers exploited the trust users placed in the official marketplace.
January 30, 2026: The One-Click RCE Patch
OpenClaw released version 2026.1.29, patching CVE-2026-25253 before public disclosure. The vulnerability allowed one-click remote code execution via a malicious link: the Control UI trusted URL parameters without validation, letting attackers hijack instances via cross-site WebSocket hijacking - even instances configured to listen only on localhost.
A CVSS score of 8.8 means this was not a minor issue. It was a critical flaw that could give attackers complete control of any OpenClaw instance with a single click.
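OpenClaw's actual gateway code is not reproduced here, but the standard defense against cross-site WebSocket hijacking is to validate the `Origin` header during the upgrade handshake. A minimal sketch (the port number and origins are illustrative assumptions, not the project's real configuration):

```python
# Illustrative sketch only -- not OpenClaw's actual code. Browsers
# attach an Origin header to every WebSocket upgrade request, so a
# server can reject any origin it does not explicitly trust, even
# for connections arriving on localhost.
ALLOWED_ORIGINS = {
    "http://localhost:18789",   # hypothetical Control UI origin
    "http://127.0.0.1:18789",
}

def origin_allowed(headers: dict) -> bool:
    """Approve the upgrade only for a trusted browser origin.
    A missing Origin header (non-browser client) is rejected here;
    relax that only for clients that authenticate another way."""
    return headers.get("Origin") in ALLOWED_ORIGINS
```

Paired with proper URL-parameter validation, this closes the cross-site half of the attack even when the gateway listens only on localhost.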
January 31, 2026: Massive Internet Exposure
Censys identified 21,639 exposed instances publicly accessible on the internet, up from approximately 1,000 just days earlier. The United States had the largest share of exposed deployments, followed by China, where an estimated 30% of instances were running on Alibaba Cloud.
Misconfigured instances were found leaking API keys, OAuth tokens, and plaintext credentials. These were not sophisticated attacks. These were basic misconfigurations exposed to the entire internet, with sensitive credentials freely accessible to anyone who looked.
January 31, 2026: The Moltbook Breach
That same week, Moltbook - a social network built exclusively for OpenClaw agents - was found to have an unsecured database exposing 35,000 email addresses and 1.5 million agent API tokens. The platform, which had grown to over 770,000 active agents, demonstrated how quickly an unvetted ecosystem can compound risk.
Think about that for a moment. A social network for AI agents. With 770,000 active agents. And an unsecured database exposing credentials. This is the scale at which AI agent ecosystems are growing, and the security maturity is not keeping pace.
February 3, 2026: Full Disclosure
CVE-2026-25253 was publicly disclosed with its CVSS score of 8.8. The same day, OpenClaw issued three high-impact security advisories: the one-click RCE vulnerability and two command injection vulnerabilities. Security researchers confirmed the attacks were being actively exploited in the wild.
The March 2026 Vulnerability Flood: Nine CVEs in Four Days
If January and February were warning shots, March was the full-scale assault. Between March 18 and March 21, 2026, nine CVEs were publicly disclosed for OpenClaw. The jgamblin/OpenClawCVEs tracker now lists 156 total security advisories, with 128 still awaiting CVE assignment.
Here is the full scorecard:
| CVE | CVSS | Date | What It Does | Patched In |
|---|---|---|---|---|
| CVE-2026-22171 | 8.2 High | Mar 18 | Path traversal in Feishu media download - arbitrary file write | 2026.2.19 |
| CVE-2026-28460 | 5.9 Medium | Mar 19 | Allowlist bypass via shell line-continuation - command injection | 2026.2.22 |
| CVE-2026-29607 | 6.4 Medium | Mar 19 | Allow-always wrapper bypass - approve safe command, swap payload, RCE | 2026.2.22 |
| CVE-2026-32032 | 7.0 High | Mar 19 | Untrusted SHELL env variable - arbitrary shell execution on shared hosts | 2026.2.22 |
| CVE-2026-32025 | 7.5 High | Mar 19 | WebSocket brute-force, no rate limiting - full session hijack from browser | 2026.2.25 |
| CVE-2026-22172 | 9.9 Critical | Mar 20 | WebSocket scope self-declaration - low-priv user becomes full admin | 2026.3.12 |
| CVE-2026-32048 | 7.5 High | Mar 21 | Sandbox escape - sandboxed sessions spawn unsandboxed children | 2026.3.1 |
| CVE-2026-32049 | 7.5 High | Mar 21 | Oversized media payload DoS - crash the service remotely, no auth needed | 2026.2.22 |
| CVE-2026-32051 | 8.8 High | Mar 21 | Privilege escalation - operator.write scope reaches owner-only surfaces | 2026.3.1 |
Four days. Nine vulnerabilities. And a very uncomfortable spotlight on the self-hosting security model.
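The advisories do not publish OpenClaw's exact parser, but the line-continuation bypass class behind CVE-2026-28460 generally works because a validator and the shell disagree about where a command ends. A hypothetical sketch of that mismatch (the allowlist and validation logic are invented for illustration):

```python
ALLOWLIST = {"echo", "ls"}

def naive_validate(cmd: str) -> bool:
    """Approve a command if the first token of each physical line is
    allowlisted, treating lines after a trailing backslash as mere
    arguments of the already-approved command. BUG: bash strips the
    backslash-newline *before* parsing, so a skipped "continuation"
    line can introduce a brand-new command."""
    lines = cmd.splitlines()
    i = 0
    while i < len(lines):
        tokens = lines[i].strip().split()
        if tokens and tokens[0] not in ALLOWLIST:
            return False
        # skip continuation lines (trailing backslash)
        while i < len(lines) and lines[i].rstrip().endswith("\\"):
            i += 1
        i += 1
    return True

# bash joins this to: echo hi ; id   -- so `id` runs, yet the
# validator only ever inspected the approved `echo` line
payload = "echo hi \\\n; id"
```

The lesson generalizes: any allowlist that re-implements shell tokenization, rather than constraining what reaches the shell, will eventually diverge from the shell's own rules.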
The Worst One: CVE-2026-22172 - Self-Declared Admin Access
A 9.9 CVSS score is about as bad as it gets. Here is how it works.
When connecting to OpenClaw's gateway via WebSocket using shared-token or password auth, the server lets the client declare its own scopes during the handshake. Log in as a regular user. Tell the server "I am operator.admin." The server says "okay."
No exploit toolkit. No buffer overflow. No race condition. You just ask. Full administrative access - gateway operations, cron management, everything.
The HackerWire called it a "self-declaration" vulnerability, which is a diplomatic way of saying the authorization check was not there. Patched in v2026.3.12 on March 13. If you are running anything older, any authenticated user on your instance is one WebSocket message away from admin.
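The bug class is easy to show in miniature. This is a hypothetical sketch, not OpenClaw's actual handshake code; the user names and scope table are invented:

```python
# Sketch of the flaw behind CVE-2026-22172 and its fix: scopes must
# come from server-side records, never from the client's handshake.
USER_SCOPES = {
    "alice": {"operator.read"},    # regular user
    "root":  {"operator.admin"},   # actual admin
}

def vulnerable_handshake(user: str, declared: set) -> set:
    # BUG: grants whatever scopes the client declares for itself
    return declared

def patched_handshake(user: str, declared: set) -> set:
    # grant only scopes the server already holds for this user
    return declared & USER_SCOPES.get(user, set())
```

Under the vulnerable version, `alice` declaring `operator.admin` simply receives it; under the patched version the declaration is intersected with the server's own records and comes back empty.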
The Browser Attack: CVE-2026-32025 - ClawJacked
Discovered by Oasis Security researchers, this vulnerability is clever and deeply unsettling.
OpenClaw's gateway had no rate limiting on authentication attempts from localhost. Sounds fine - until you remember browsers can open WebSocket connections to localhost. A malicious website you visit can:
- Connect to your local OpenClaw gateway
- Brute-force the password at hundreds of attempts per second
- Exploit the fact that localhost connections auto-approve device pairing
Full session access. Your agent compromised because you opened the wrong browser tab.
In detail: JavaScript on a malicious page opens a WebSocket connection to the OpenClaw gateway port on localhost - permitted, because cross-origin policies do not stop a browser from opening WebSocket connections. The gateway's rate limiter exempted localhost connections entirely, so the script can hammer the password unimpeded. Once authenticated, it silently registers itself as a trusted device.
Your AI agent - with access to your files, emails, and accounts - is now controlled by a website you visited.
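The fix is ordinary rate limiting with no localhost carve-out. A minimal sliding-window sketch (illustrative, not the project's actual limiter):

```python
import time
from collections import defaultdict, deque

class AuthRateLimiter:
    """Sliding-window limiter for gateway auth attempts. The key
    point from the ClawJacked disclosure: the limit must apply to
    *every* client -- exempting 127.0.0.1 is exactly what let a
    browser tab brute-force the password."""

    def __init__(self, max_attempts: int = 5, window_s: float = 60.0):
        self.max_attempts = max_attempts
        self.window_s = window_s
        self._attempts = defaultdict(deque)   # client_id -> timestamps

    def allow(self, client_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self._attempts[client_id]
        while q and now - q[0] > self.window_s:   # drop stale entries
            q.popleft()
        if len(q) >= self.max_attempts:
            return False                          # throttled
        q.append(now)
        return True
```

Five attempts per minute is generous for a human and ruinous for a brute-force script running at hundreds of attempts per second.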
The Sandbox That Was Not: CVE-2026-32048
OpenClaw's sandbox mode is one of the features people cite when arguing it is safe to self-host. Turns out it had a fundamental flaw.
When a sandboxed session spawns a child process through sessions_spawn, OpenClaw failed to inherit sandbox restrictions. The child runs with sandbox.mode: off. A compromised sandboxed agent escapes confinement entirely - arbitrary code execution, data access, and DoS all on the table.
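The failure pattern is a classic one: the child is constructed from defaults instead of from the parent. A minimal sketch, assuming a config object shaped like OpenClaw's `sandbox.mode` setting (the field names and spawn functions are illustrative, not the project's real schema):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SessionConfig:
    sandbox_mode: str = "off"   # the default is the dangerous value

def spawn_child_vulnerable(parent: SessionConfig) -> SessionConfig:
    # BUG: child session is built from defaults, so a sandboxed
    # parent quietly produces an unsandboxed child
    return SessionConfig()

def spawn_child_patched(parent: SessionConfig) -> SessionConfig:
    # child inherits the parent's restrictions verbatim; a stricter
    # design would only ever allow tightening, never loosening
    return replace(parent)
```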
This is especially ironic given that NVIDIA built NemoClaw specifically to add better sandboxing around OpenClaw for enterprise use. The sandbox you trusted to protect you was essentially transparent to determined attackers.
Why AI Agents Represent a New Attack Frontier
Traditional software vulnerabilities are bad. AI agent vulnerabilities are catastrophic. Here is why.
Scope of Access
A typical application vulnerability might let an attacker read files in a specific directory, or access a particular database. An AI agent vulnerability gives an attacker access to everything the agent can do - which often includes executing arbitrary commands, reading any file, sending messages as you, and accessing all your accounts.
OpenClaw instances were found with access to email accounts, cloud provider credentials, internal databases, and source code repositories. A single vulnerability does not just compromise the agent. It compromises your entire digital identity.
Persistence and Memory
AI agents remember. That is their value proposition. But it also means that a compromised agent retains the attacker's influence across sessions. Poisoned memory entries can persistently hijack workflows. Researchers achieved 90%+ attack success rates against major models using memory poisoning techniques.
Once an attacker controls an agent's memory, they do not just have access today. They have access tomorrow, next week, and next month - until you discover and remediate the compromise.
Autonomous Action
Traditional malware requires the attacker to actively control it. AI agents act autonomously. An attacker who compromises an agent can set it to perform actions on their behalf without ongoing interaction.
Imagine an attacker who instructs your compromised agent to "check my email every hour and forward any messages containing 'password' or 'login' to this address." The agent complies. Autonomously. Forever.
Trust and Social Engineering
People trust their AI agents. You tell your agent things you would not tell another person. You give it access because it is "yours" and it "helps you." This trust makes AI agents perfect targets for social engineering at scale.
The ClawHavoc campaign succeeded because users trusted the ClawHub marketplace. They installed skills that looked legitimate because the platform itself was legitimate. The attackers exploited trust in the ecosystem.
The RSA Conference 2026 Context: Agentic AI Security Takes Center Stage
The timing of these disclosures is not coincidental. RSA Conference 2026, held in early April, has made agentic AI security a central theme. Adversa AI was named "Most Innovative Agentic AI Security" platform at the Global InfoSec Awards during RSA Conference 2026.
The conversation has shifted from theoretical risks to active, infrastructure-level threats. We are seeing a surge in advanced attacks, from multi-agent offensive behaviors to serious vulnerabilities in widely deployed tools like OpenClaw and Copilot.
Research presented at RSA and published in the weeks leading up to it reveals the scope of the problem:
- 93% of 30 AI agent frameworks rely on unscoped API keys
- 0% have per-agent identity
- 97% lack user consent mechanisms
- 22 distinct techniques of indirect prompt injection have been observed in the wild
- 90%+ attack success rates against major models using memory poisoning
As agents gain more autonomy and access, the need for strong, agent-specific defense mechanisms and identity governance is now pressing.
The CISO Benchmark Report: AI as the Leading Source of Friction
The 2026 CISO Benchmark Report from the Retail and Hospitality ISAC and IANS, released April 1, 2026, confirms what security leaders already know: AI has become the most significant new challenge facing security teams.
This year, AI tops the list of friction points for CISOs, ahead of ransomware and phishing. 71% of respondents identified AI as a primary concern, citing risks such as data leakage, insider misuse, and insufficient governance controls.
At the same time, organizations are increasingly integrating AI into their security operations, particularly for threat detection, analysis, and reporting. Despite these advances, CISOs emphasized that AI is compounding, not replacing, existing threats, adding complexity to an already demanding cybersecurity landscape.
70% of CISOs reported that AI has been added to their scope of responsibility. Nearly 90% expect spending on AI-related security initiatives to rise, often through reallocating existing budgets rather than adding new funds.
Defense Strategies: Protecting Yourself in the AI Agent Era
If you are running OpenClaw or any AI agent, here are the immediate steps you need to take.
Update Immediately
If you are running OpenClaw, update to version 2026.3.12 or later immediately. The fix for CVE-2026-22172 and the other critical vulnerabilities is included in this version. Running older versions means you are vulnerable to known, actively exploited attacks.
Network Segmentation
Do not expose your AI agent to the internet unless absolutely necessary. If you must expose it, use a VPN or zero-trust access solution. The 21,639 exposed OpenClaw instances found by Censys represent 21,639 unnecessary attack surfaces.
Strong Authentication
Use strong, unique passwords for your AI agent gateway. Enable multi-factor authentication if available. The ClawJacked vulnerability succeeded because the gateway had no rate limiting and weak passwords could be brute-forced quickly.
Principle of Least Privilege
Limit what your AI agent can access. Does it really need access to your entire file system? Your entire email history? Your production databases? Restrict access to only what is necessary for the agent to perform its intended functions.
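One concrete way to enforce this for file access is a workspace guard: every path the agent touches must resolve inside an explicit root. A sketch under assumed paths (the workspace location is just an example):

```python
from pathlib import Path

# Least-privilege file guard for an agent: resolve the requested
# path first, so `../` segments and symlink hops are normalized
# before the containment check.
WORKSPACE = Path("/home/user/agent-workspace").resolve()

def path_allowed(requested: str) -> bool:
    p = Path(requested).resolve()
    return p == WORKSPACE or WORKSPACE in p.parents
```

Resolving before comparing is what defeats traversal tricks like the Feishu media-download flaw (CVE-2026-22171); a prefix check on the raw string would not.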
Monitoring and Logging
Monitor your AI agent's activity. Look for unusual patterns - commands you did not initiate, files being accessed at odd hours, network connections to unexpected destinations. The sooner you detect a compromise, the less damage it can do.
Skill Vetting
If you are using skills or plugins from a marketplace, vet them carefully. The ClawHavoc campaign showed that attackers can create professional-looking skills that are actually malware. Only install skills from trusted sources, and review what permissions they request.
Sandboxing (With Caveats)
Run your AI agent in a sandboxed environment. But remember CVE-2026-32048 - sandboxes can have vulnerabilities too. Do not rely on sandboxing as your only defense layer.
Regular Security Audits
Conduct regular security audits of your AI agent deployment. Check for exposed ports, weak credentials, unnecessary permissions, and outdated versions. The OpenClaw vulnerability flood was discovered by security researchers - but attackers were likely aware of the flaws too.
The Future of AI Agent Security
The OpenClaw security crisis is not an isolated incident. It is a preview of what is coming as AI agents become more capable, more autonomous, and more deeply integrated into our digital lives.
Microsoft, NVIDIA, and other major vendors are racing to build secure AI agent platforms. Microsoft's Zero Trust for AI framework extends proven security principles to autonomous agents. NVIDIA's NemoClaw aims to add enterprise-grade sandboxing to OpenClaw-style agents.
But the fundamental challenge remains: AI agents need broad access to be useful, and broad access creates broad attack surfaces. Solving this tension is the defining cybersecurity challenge of 2026.
FAQ: OpenClaw and AI Agent Security
What is OpenClaw?
OpenClaw is an open-source AI agent that can execute shell commands, read and write files, browse the web, send emails, and take autonomous actions across your digital life. It became one of the fastest-growing GitHub repositories in history, amassing over 135,000 stars within weeks.
How many vulnerabilities have been discovered in OpenClaw?
As of April 2026, the jgamblin/OpenClawCVEs tracker lists 156 total security advisories, with 128 still awaiting CVE assignment. Nine CVEs were disclosed in a four-day period in March 2026 alone, including one critical vulnerability with a 9.9 CVSS score.
What is the most serious OpenClaw vulnerability?
CVE-2026-22172, disclosed March 20, 2026, has a CVSS score of 9.9. It allows any authenticated user to declare themselves an administrator by simply specifying "operator.admin" scope during WebSocket connection. No exploit toolkit required - you just ask for admin access and the server grants it.
What is the ClawJacked vulnerability?
CVE-2026-32025, discovered by Oasis Security, allows malicious websites to hijack OpenClaw agents by brute-forcing the gateway password through WebSocket connections to localhost. Because browsers can connect to localhost and the gateway had no rate limiting, attackers could gain full session access without user interaction.
Is OpenClaw safe to use?
OpenClaw can be used safely if properly configured and kept up to date. Users should update to version 2026.3.12 or later, avoid exposing the gateway to the internet, use strong authentication, limit the agent's access privileges, and monitor for unusual activity.
What is an AI agent?
An AI agent is an autonomous software system powered by large language models that can take actions on behalf of users. Unlike traditional chatbots that only provide information, AI agents can execute commands, access systems, and perform tasks without constant human supervision.
Why are AI agents a security risk?
AI agents require broad system access to be useful, which creates broad attack surfaces. A compromised AI agent can access everything the user can access - files, emails, accounts, databases. Additionally, AI agents act autonomously and retain memory across sessions, meaning compromises can persist and operate without ongoing attacker interaction.
What is the ClawHavoc malware campaign?
In January 2026, attackers distributed 341 malicious skills through OpenClaw's ClawHub marketplace. These skills appeared legitimate but installed keyloggers and info-stealing malware on users' systems. Approximately 12% of the entire ClawHub registry was compromised.
How can I secure my AI agent deployment?
Key security measures include: keeping the agent updated, using strong authentication, implementing network segmentation, following the principle of least privilege, monitoring agent activity, vetting any skills or plugins, using sandboxing, and conducting regular security audits.
What is the RSA Conference 2026 saying about AI agent security?
RSA Conference 2026 has made agentic AI security a central theme. Research presented shows 93% of AI agent frameworks rely on unscoped API keys, 0% have per-agent identity, and 97% lack user consent mechanisms. The industry is recognizing AI agent security as a critical priority.
Are other AI agents vulnerable too?
Yes. Research from Adversa AI, Unit 42, and others has identified vulnerabilities in multiple AI agent frameworks including CrewAI, AutoGen, and various enterprise AI platforms. The OpenClaw vulnerabilities are part of a broader pattern of security challenges in the AI agent ecosystem.
What should CISOs know about AI agent security?
According to the 2026 CISO Benchmark Report, 71% of security leaders identify AI as a primary concern, ahead of ransomware and phishing. 70% report that AI has been added to their scope of responsibility, and nearly 90% expect AI security spending to increase. AI is compounding existing threats rather than replacing them.
Conclusion: The AI Agent Security Era Has Arrived
The OpenClaw security crisis is a wake-up call. AI agents are no longer experimental toys. They are production systems with production-level vulnerabilities and production-level consequences when those vulnerabilities are exploited.
Nine CVEs in four days. A 9.9 CVSS critical vulnerability. 156 total security advisories. 21,639 exposed instances on the internet. 341 malicious skills in the official marketplace. These numbers tell a clear story: AI agent security is not a future problem. It is a right now problem.
If you are running an AI agent, update it today. Segment it from your network. Limit its access. Monitor its activity. Treat it with the same security rigor you would apply to any production system with access to your most sensitive data.
The AI agent revolution is here. The security challenges that come with it are here too. The organizations that recognize this reality and act on it will be the ones that thrive in the agentic AI era. The ones that do not will become cautionary tales.
Your AI agent is powerful. It is also a target. Secure it accordingly.
Stay informed about the latest AI security threats and defenses. Follow our blog for weekly updates on the evolving cybersecurity landscape, or contact our team for a comprehensive AI security assessment.