
Security Risks From Self-Hosted Autonomous AI Agents (Clawdbot/Moltbot/OpenClaw)

autonomous agents, malicious skills, token theft, self-hosted, remote command execution, unsafe defaults, data leakage, identity security, exposed deployments, weak authentication, attack surface, self-propagating, github, misconfiguration, oauth tokens
Updated February 20, 2026 at 04:00 PM · 24 sources

Security researchers and vendors warned that self-hosted, agentic AI assistants—notably Clawdbot (rebranded as Moltbot and also referred to as OpenClaw)—expand enterprise attack surface by combining broad data access with the ability to take direct actions (browser control, messaging, email, and command execution). Resecurity reported finding hundreds of exposed deployments reachable from the public Internet, frequently with weak authentication, unsafe defaults, or misconfigurations that could allow attackers to access API keys/OAuth tokens, retrieve private chat histories, and in some cases achieve remote command execution on the host. Dark Reading similarly highlighted that OpenClaw’s ecosystem can be undermined by malicious “skills” and fragile configuration/removal practices, reinforcing that these tools can be difficult to operate safely even when users attempt to limit permissions.
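
As a rough illustration of the exposure pattern Resecurity describes, the sketch below probes a host for an agent gateway that answers anonymous HTTP requests. The port and status path are hypothetical placeholders rather than documented OpenClaw endpoints; a real audit would substitute the deployment's actual values.

```python
# Minimal exposure probe, illustrative only. The port and path are
# HYPOTHETICAL placeholders -- adapt them to the deployment being audited.
import urllib.error
import urllib.request

def probe_unauthenticated(host: str, port: int = 8080, path: str = "/api/status") -> bool:
    """Return True if the gateway answers an anonymous request at all.

    Any response to an unauthenticated client (including 200 or 404)
    suggests the service is reachable without an auth layer in front.
    """
    url = f"http://{host}:{port}{path}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as exc:
        # 401/403 indicates some authentication layer is enforced.
        return exc.code not in (401, 403)
    except (urllib.error.URLError, OSError):
        return False  # unreachable on this host/port

if __name__ == "__main__":
    if probe_unauthenticated("127.0.0.1"):
        print("WARNING: gateway answered an anonymous request")
```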

CyberArk framed the issue as an identity security problem: autonomous agents often run with user-level permissions and integrate with platforms like Slack, WhatsApp, and GitHub, creating pathways for credential/token theft, data leakage, and unauthorized actions if the agent is exposed to untrusted content or deployed without strong controls. In contrast, Dark Reading’s coverage of Shai-Hulud focuses on a separate threat—self-propagating supply-chain worms targeting NPM projects—and is not directly about autonomous AI agents, though it underscores the broader risk of downstream compromise when widely used components or ecosystems are poisoned.

Sources

5 more from sources like Cyber Security News, the SecuritySenses blog, The Hacker News, BankInfoSecurity, and GovInfoSecurity

Related Stories

Clawdbot Open-Source Agentic AI Assistants Raise Endpoint and Identity Security Risks

The open-source agentic assistant **Clawdbot** rapidly went viral on GitHub (reported at ~24,000–25,000+ stars in a short period) and drew high-profile attention, with reports of engineers running it locally on always-on hardware such as **Mac minis**. Clawdbot is positioned as a “local-first” AI gateway that can be driven from common messaging platforms (e.g., Slack/Discord/Telegram) and can take real actions on a host—invoking terminals, running scripts, using a browser for web automation, and retaining “memory” over time—effectively operating with permissions similar to a human user account. Security commentary around Clawdbot emphasizes that agentic assistants change incident patterns because they can persist like service accounts while behaving like users, expanding the blast radius if compromised or misconfigured. Key risks highlighted include **shadow AI** adoption outside IT controls, inherited or over-granted permissions across chat and SaaS tools, data exposure via long-lived context/memory, and new attack paths such as prompt manipulation or “helpful” automation that executes unsafe actions on endpoints. The guidance focuses on SOC readiness: monitoring for unusual automation behaviors and access patterns consistent with an agent executing actions across endpoints and collaboration/SaaS environments, and treating these tools as a machine-identity and endpoint-control problem rather than a simple chatbot governance issue.
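
To make that SOC-readiness guidance concrete, one starting point is flagging interactive shells whose parent is the agent process, a pattern consistent with automation executing commands on an endpoint. This is a minimal sketch: the agent process name is an assumption for illustration, and production detection would live in EDR rules rather than a standalone script.

```python
# Hypothetical detection sketch: flag shells spawned by a long-running agent.
# "clawdbot" is a placeholder process name; map it to how the assistant
# actually launches in your environment.
import psutil

AGENT_NAMES = {"clawdbot"}  # assumed agent process names
SHELLS = {"sh", "bash", "zsh", "powershell.exe", "cmd.exe"}

def suspicious_children() -> list[psutil.Process]:
    hits = []
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] in SHELLS:
                parent = proc.parent()
                if parent and parent.name() in AGENT_NAMES:
                    hits.append(proc)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is inaccessible; skip it
    return hits

for p in suspicious_children():
    print(f"agent-spawned shell: pid={p.pid} cmdline={p.cmdline()}")
```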

1 month ago
Security Risks From OpenClaw ‘Sovereign’ AI Agents With Local Terminal Access

**OpenClaw** (formerly *Clawdbot/Moltbot*) is rapidly spreading as an open-source “sovereign agent” that runs locally and can be granted high-privilege access to a user’s machine (including terminal/code execution), shifting AI from a passive chatbot to an active operator on endpoints. Trend Micro warns this model materially expands the attack surface by combining agent **access to files/commands**, **untrusted inputs** (e.g., messages/web/email), and **exfiltration paths**, and adds a fourth compounding risk—**persistence** via retained memory/state—creating conditions where prompt/instruction manipulation could translate into real system actions and data loss. Adoption is accelerating in China, where Shenzhen’s Longgang district proposed subsidies and an ecosystem to support OpenClaw-driven “one-person companies,” even as regulators and state media flag **data security and privacy** concerns tied to the tool’s ability to access personal and enterprise data. The reporting notes OpenClaw’s plug-in model support (including OpenAI, Anthropic, and Chinese model providers) and highlights official scrutiny amid China’s tightened data-privacy and export-control posture, underscoring that the primary risk is not a single vulnerability but the **operational security implications of deploying locally empowered AI agents** at scale.
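
One way to blunt the combination of command access and untrusted inputs that Trend Micro describes is to place an approval gate and a sensitive-path denylist in front of anything the agent executes. The wrapper below is a generic sketch under those assumptions, not OpenClaw's actual plug-in API.

```python
# Illustrative guardrail: require explicit human confirmation before the
# agent runs any shell command, and refuse commands touching sensitive paths.
import shlex
import subprocess

DENYLIST_TOKENS = {"~/.ssh", "~/.aws", ".env", "id_rsa"}  # illustrative

def run_agent_command(cmd: str) -> str:
    if any(tok in cmd for tok in DENYLIST_TOKENS):
        raise PermissionError(f"command touches a sensitive path: {cmd!r}")
    answer = input(f"Agent wants to run: {cmd!r} -- allow? [y/N] ")
    if answer.strip().lower() != "y":
        raise PermissionError("operator declined the command")
    # shell=False plus shlex.split keeps the approved string from smuggling
    # in extra shell syntax, so the operator approves exactly what runs.
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True, timeout=60)
    return result.stdout
```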

Today
OpenClaw (ClawdBot/Moltbot) One-Click Remote Code Execution via Unsafe Gateway URL Handling

A **critical one-click remote code execution (RCE)** issue was reported in *OpenClaw* (also referred to as **ClawdBot/Moltbot**), an open-source AI “agent” assistant that runs with high local privileges and access to sensitive data (e.g., messaging apps and API keys). The described exploit chain abuses **unsafe URL parameter ingestion** (e.g., a `gatewayUrl` query parameter accepted without validation), persistence of attacker-controlled values (stored in `localStorage`), and an **automatic gateway connection** that transmits an `authToken` during the handshake—enabling **cross-site WebSocket hijacking** and ultimately unauthenticated code execution after a victim clicks a single malicious link. Reporting indicates the flaw has been **weaponized**, making it a practical drive-by compromise path for endpoints running the assistant. Separate reporting highlighted broader concerns with agentic/open-source AI tooling and deployments, including the security risks of highly privileged “AI that acts for you” and the growing attack surface created by exposed AI services. Research cited large-scale internet exposure of open-source LLM runtimes (e.g., **Ollama**) with tool-calling and weak guardrails, warning that a single vulnerability or misconfiguration could enable widespread abuse (resource hijacking, identity laundering, or remote execution of privileged operations). These themes reinforce that AI agents and self-hosted AI stacks should be treated as **critical infrastructure**, with strict input validation, hardened update/connection flows, and strong monitoring around token handling and outbound connections.
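
The root cause in the described chain is that an attacker-supplied `gatewayUrl` is trusted, persisted, and later handed the `authToken` automatically. A sketch of the missing validation step, using illustrative names rather than OpenClaw's real code, might look like this:

```python
from urllib.parse import urlparse

# Allowlisted (scheme, host) pairs for the WebSocket gateway -- illustrative values.
ALLOWED_GATEWAYS = {("ws", "127.0.0.1"), ("wss", "gateway.internal.example")}

def validate_gateway_url(raw: str) -> str:
    """Accept a gateway override only if it targets an allowlisted origin."""
    parsed = urlparse(raw)
    if (parsed.scheme, parsed.hostname) not in ALLOWED_GATEWAYS:
        raise ValueError(f"refusing untrusted gateway: {raw!r}")
    return raw

# The reported chain works precisely because no such check runs: a link like
#   https://victim-ui/?gatewayUrl=wss://attacker.example/ws
# gets persisted and later receives the authToken during the auto-handshake.
try:
    validate_gateway_url("wss://attacker.example/ws")
except ValueError as exc:
    print(exc)  # refusing untrusted gateway: 'wss://attacker.example/ws'
```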

1 month ago
