Skip to main content
Mallory
Mallory

OpenClaw Security Concerns and Command Injection Risk

openclawopenclaw-as-a-serviceauthenticated command injectionexploit riskcommand injectioninsecure by designcredential exposurevulnerabilitycontainer executiondockerpath manipulationhosted deploymentsenvironment variablestencent cloud
Updated February 6, 2026 at 03:00 PM3 sources
OpenClaw Security Concerns and Command Injection Risk

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Cloud providers began rapidly shipping OpenClaw-as-a-service deployments despite warnings that the AI agent platform is “demonstrably insecure.” OpenClaw is designed to act on users’ behalf across online services (e.g., email and calendars) by taking user credentials and executing instructions via messaging apps such as Telegram or WhatsApp; this model increases blast radius if the platform is compromised. Tencent Cloud, DigitalOcean, and Alibaba Cloud published quick-deploy options (including one-click installers and low-cost small-server templates), effectively lowering the barrier to running OpenClaw in hosted environments.

Separately, CVE-2026-24763 describes an authenticated command injection condition tied to OpenClaw’s Docker execution behavior via manipulation of the PATH environment variable, indicating a concrete exploitation avenue beyond general “insecure by design” concerns. In combination, the rapid commoditization of hosted OpenClaw deployments and the presence of a command-injection class vulnerability heighten the likelihood of real-world abuse, particularly where OpenClaw instances are granted broad credentials and automation permissions.

Related Entities

Organizations

Related Stories

Security Risks From OpenClaw ‘Sovereign’ AI Agents With Local Terminal Access

Security Risks From OpenClaw ‘Sovereign’ AI Agents With Local Terminal Access

**OpenClaw** (formerly *Clawdbot/Moltbot*) is rapidly spreading as an open-source “sovereign agent” that runs locally and can be granted high-privilege access to a user’s machine (including terminal/code execution), shifting AI from a passive chatbot to an active operator on endpoints. Trend Micro warns this model materially expands the attack surface by combining agent **access to files/commands**, **untrusted inputs** (e.g., messages/web/email), and **exfiltration paths**, and adds a fourth compounding risk—**persistence** via retained memory/state—creating conditions where prompt/instruction manipulation could translate into real system actions and data loss. Adoption is accelerating in China, where Shenzhen’s Longgang district proposed subsidies and an ecosystem to support OpenClaw-driven “one-person companies,” even as regulators and state media flag **data security and privacy** concerns tied to the tool’s ability to access personal and enterprise data. The reporting notes OpenClaw’s plug-in model support (including OpenAI, Anthropic, and Chinese model providers) and highlights official scrutiny amid China’s tightened data-privacy and export-control posture, underscoring that the primary risk is not a single vulnerability but the **operational security implications of deploying locally empowered AI agents** at scale.

Today
OpenClaw AI Agent Skills Abused for Credential Exposure and Prompt-Injection Backdooring

OpenClaw AI Agent Skills Abused for Credential Exposure and Prompt-Injection Backdooring

Security researchers and media reports warned that the open-source AI agent **OpenClaw** (formerly *Moltbot/Clawdbot*) can be abused via its *ClawHub* “skills” ecosystem, with findings that **~7.1% of marketplace skills** contributed to exposure of **API keys, credentials, and credit card data** due to problematic `SKILL.md` instructions. Snyk highlighted a particularly severe example, **buy-anything skill v2.0.0**, which performs credit-card “tokenization” in a way that can be used to **pilfer financial details** before prompting users to provide card information. Additional research described **indirect prompt-injection** risk: a malicious Google document can coerce OpenClaw into integrating a new **Telegram bot**, enabling follow-on actions such as **file exfiltration** and deployment of a **Sliver** command-and-control beacon for persistence, with potential for **privilege escalation, lateral movement, and ransomware execution**. Separately, one report noted OpenClaw’s move to scan skills with **VirusTotal**, but also emphasized that signature-based scanning is not a complete mitigation for **prompt-injection** and other logic-level abuses; other items in the same news roundup (e.g., telecom “Salt Typhoon” oversight) were unrelated to OpenClaw’s vulnerabilities.

1 months ago
OpenClaw (ClawdBot/Moltbot) One-Click Remote Code Execution via Unsafe Gateway URL Handling

OpenClaw (ClawdBot/Moltbot) One-Click Remote Code Execution via Unsafe Gateway URL Handling

A **critical one-click remote code execution (RCE)** issue was reported in *OpenClaw* (also referred to as **ClawdBot/Moltbot**), an open-source AI “agent” assistant that runs with high local privileges and access to sensitive data (e.g., messaging apps and API keys). The described exploit chain abuses **unsafe URL parameter ingestion** (e.g., a `gatewayUrl` query parameter accepted without validation), persistence of attacker-controlled values (stored in `localStorage`), and an **automatic gateway connection** that transmits an `authToken` during the handshake—enabling **cross-site WebSocket hijacking** and ultimately unauthenticated code execution after a victim clicks a single malicious link. Reporting indicates the flaw has been **weaponized**, making it a practical drive-by compromise path for endpoints running the assistant. Separate reporting highlighted broader concerns with agentic/open-source AI tooling and deployments, including the security risks of highly privileged “AI that acts for you” and the growing attack surface created by exposed AI services. Research cited large-scale internet exposure of open-source LLM runtimes (e.g., **Ollama**) with tool-calling and weak guardrails, warning that a single vulnerability or misconfiguration could enable widespread abuse (resource hijacking, identity laundering, or remote execution of privileged operations). These themes reinforce that AI agents and self-hosted AI stacks should be treated as **critical infrastructure**, with strict input validation, hardened update/connection flows, and strong monitoring around token handling and outbound connections.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.