Mallory

Viral Moltbot AI Assistant Raises Security Concerns as Moltbook Agent Social Network Emerges

social network, messaging platforms, autonomous agents, agent architectures, automated abuse, ai agents, sandboxing, telegram, email integration, prompt injection
Updated January 31, 2026 at 12:04 AM · 2 sources
The open-source AI assistant Moltbot (also referred to as OpenClaw) has gone viral due to its ability to autonomously perform real-world tasks on a user’s computer, interacting through common messaging platforms (e.g., iMessage, WhatsApp, Telegram, Discord, Slack, Signal) and integrating with personal accounts such as calendars and email. Coverage highlights that this broad access and autonomy materially increase risk, with recommendations to run the tool in an isolated environment (e.g., a dedicated machine) to reduce the blast radius if the agent is compromised or behaves unexpectedly.
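The isolation advice above can be approximated with container-level controls. The sketch below is a hypothetical hardening example, not Moltbot's documented deployment: the image name, volume path, and resource limits are all illustrative assumptions, and a real agent would need selective network access rather than none.

```python
# Hypothetical sketch: run an agent like Moltbot inside a locked-down Docker
# container to limit blast radius. Flags and image name are illustrative.
import subprocess  # needed if you uncomment the run() call below


def build_sandbox_cmd(image="moltbot-sandbox:latest", workdir="/opt/agent-data"):
    return [
        "docker", "run", "--rm",
        "--network", "none",          # no network until explicitly allowed
        "--read-only",                # immutable root filesystem
        "--cap-drop", "ALL",          # drop all Linux capabilities
        "--security-opt", "no-new-privileges",
        "--memory", "2g",             # cap memory
        "--pids-limit", "256",        # cap process count
        "-v", f"{workdir}:/data:rw",  # one writable volume, nothing else
        image,
    ]


if __name__ == "__main__":
    cmd = build_sandbox_cmd()
    print(" ".join(cmd))
    # subprocess.run(cmd)  # uncomment on a host with Docker installed
```

In practice the `--network none` line would be replaced by an allowlisted egress proxy, since a messaging-integrated agent needs some connectivity; the point of the sketch is that each grant is explicit.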

A companion project, Moltbook, has rapidly scaled into a Reddit-style social network where AI agents can post and interact without human intervention, reportedly reaching tens of thousands of registered agent users and generating large volumes of automated content across many subcommunities. Moltbook operates via a downloadable “skill” configuration (a prompt/config file) that enables agents to post via API, creating additional exposure to prompt/config supply-chain risks and automated abuse; reporting frames the ecosystem’s growth as occurring alongside “deep security issues” inherent in highly permissioned, plugin/skill-driven agent architectures.
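One mitigation for the skill supply-chain risk described above is to pin downloaded skill files to known-good hashes before an agent loads them. This is a hedged sketch, not Moltbook's actual mechanism; the skill name and allowlist entries are hypothetical.

```python
# Illustrative sketch (not Moltbook's actual mechanism): pin a downloaded
# "skill" prompt/config file to a known-good SHA-256 so a tampered or
# swapped file is rejected before the agent ever parses it.
import hashlib

# Hypothetical allowlist; this example value is sha256(b"test").
TRUSTED_SKILL_HASHES = {
    "moltbook-post": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def verify_skill(name: str, content: bytes) -> bool:
    """Return True only if the skill bytes match the pinned hash."""
    expected = TRUSTED_SKILL_HASHES.get(name)
    if expected is None:
        return False  # unknown skills are rejected by default
    return hashlib.sha256(content).hexdigest() == expected
```

The default-deny posture (unknown skill name means rejection) is the design choice that matters here; hash pinning alone does not help if operators pin whatever they downloaded without reviewing it.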

Related Stories

Moltbot AI Assistant Adoption Drives Security Risks and Malware Impersonation

The open-source agentic AI assistant **Moltbot** (formerly *Clawdbot*) rapidly gained developer adoption, but security researchers and media reporting warned that its “always-on” design and deep integrations can require broad access to sensitive accounts and credentials across messaging platforms and services. Reported risks include insecure deployments and misconfigurations that leave instances exposed to the internet, weak secret-handling practices (including plaintext storage on local filesystems), and the broader challenge that agentic tools can bypass traditional security boundaries unless operators implement strong controls such as least-privilege access, monitoring, encryption-at-rest, and sandboxing/containerization. Attackers also capitalized on Moltbot’s popularity by publishing a **fake Moltbot/Clawdbot VS Code extension** on Microsoft’s official Marketplace, despite Moltbot not having an official extension. The malicious extension (`clawdbot.clawdbot-agent`) was designed to run on IDE launch, fetch `config.json` from `clawdbot.getintwopc[.]site`, execute a dropped binary (`Code.exe`), and install a legitimate remote access tool (**ConnectWise ScreenConnect**) that connected to `meeting.bulletmailer[.]net:8041` for persistent attacker access; Microsoft removed the extension after it was reported.
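Teams worried about the fake extension can sweep installed VS Code extensions for the reported ID. Only the extension identifier `clawdbot.clawdbot-agent` comes from the reporting above; the directory-naming assumption (Marketplace installs as `<publisher>.<name>-<version>` folders) and the scan logic are a hedged sketch, not an official detection tool.

```python
# Hedged IoC sweep: look for the reported malicious extension ID among
# installed VS Code extension folders (typically ~/.vscode/extensions).
from pathlib import Path

MALICIOUS_ID = "clawdbot.clawdbot-agent"  # from public reporting


def scan_extensions(ext_dir: str) -> list[str]:
    """Return paths of extension folders matching the malicious ID."""
    findings = []
    root = Path(ext_dir)
    if not root.is_dir():
        return findings
    for entry in root.iterdir():
        # Marketplace installs are usually "<publisher>.<name>-<version>"
        if entry.name.startswith(MALICIOUS_ID):
            findings.append(str(entry))
    return findings


if __name__ == "__main__":
    home_exts = Path.home() / ".vscode" / "extensions"
    for hit in scan_extensions(str(home_exts)):
        print(f"possible malicious extension: {hit}")
```

A fuller sweep would also check for the dropped `Code.exe` binary and outbound connections to the reported domains, but presence of the extension folder is the simplest first signal.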

1 month ago
Security Risks From Self-Hosted Autonomous AI Agents (Clawdbot/Moltbot/OpenClaw)

Security researchers and vendors warned that **self-hosted, agentic AI assistants**—notably **Clawdbot** (rebranded as **Moltbot** and also referred to as **OpenClaw**)—expand enterprise attack surface by combining broad data access with the ability to take direct actions (browser control, messaging, email, and command execution). Resecurity reported finding **hundreds of exposed deployments** reachable from the public Internet, frequently with **weak authentication, unsafe defaults, or misconfigurations** that could allow attackers to access **API keys/OAuth tokens**, retrieve **private chat histories**, and in some cases achieve **remote command execution** on the host. Dark Reading similarly highlighted that OpenClaw’s ecosystem can be undermined by **malicious “skills”** and fragile configuration/removal practices, reinforcing that these tools can be difficult to operate safely even when users attempt to limit permissions. CyberArk framed the issue as an **identity security** problem: autonomous agents often run with **user-level permissions** and integrate with platforms like *Slack*, *WhatsApp*, and *GitHub*, creating pathways for **credential/token theft, data leakage, and unauthorized actions** if the agent is exposed to untrusted content or deployed without strong controls. In contrast, Dark Reading’s coverage of **Shai-hulud** focuses on a separate threat—**self-propagating supply-chain worms targeting NPM projects**—and is not directly about autonomous AI agents, though it underscores the broader risk of downstream compromise when widely used components or ecosystems are poisoned.
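The exposed-deployment pattern Resecurity describes (Internet-reachable listeners, weak or absent authentication, plaintext secrets) lends itself to a simple pre-deployment configuration lint. The config keys below (`host`, `auth_token`, `store_secrets_plaintext`) are illustrative assumptions, not OpenClaw's actual schema.

```python
# Hypothetical config lint for the misconfiguration classes described in the
# reporting: public bind address, missing auth, plaintext secret storage.
# Key names are illustrative, not any real agent's schema.
def lint_agent_config(cfg: dict) -> list[str]:
    issues = []
    if cfg.get("host", "127.0.0.1") not in ("127.0.0.1", "localhost"):
        issues.append("listener reachable beyond loopback; bind to 127.0.0.1")
    if not cfg.get("auth_token"):
        issues.append("no auth token configured; any client can issue commands")
    if cfg.get("store_secrets_plaintext", False):
        issues.append("secrets stored in plaintext on disk; use an OS keychain")
    return issues
```

A lint like this only catches declared settings; it does not substitute for the least-privilege, monitoring, and sandboxing controls the coverage recommends.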

3 weeks ago
Moltbook Data Exposure and Emerging Risk of Viral AI Prompt Worms

Security researchers reported a major data exposure affecting **Moltbook**, an AI-agent-focused social network used by autonomous agents such as **OpenClaw**. According to a **Wiz** analysis, misconfigured *Supabase* backend controls—specifically an exposed Supabase API key in client-side JavaScript combined with missing **Row Level Security (RLS)**—allowed database access and schema enumeration via **GraphQL**, resulting in exposure of **~4.75 million records**. The leaked data reportedly included **~1.5 million API authorization tokens**, **tens of thousands of human email addresses**, **4,060 private messages between agents**, and **OpenAI API keys stored in plaintext** within some messages, creating a direct risk of account takeover/agent impersonation and downstream API abuse. Separate reporting highlighted the broader security implications of rapidly spreading, “viral” **prompt-based worms** in agentic AI ecosystems, noting that today’s major model providers can sometimes disrupt malicious agent activity through API monitoring and key termination, but that this control diminishes as capable **local models** become more accessible. A third item referenced **CVE-2026-24763** (an authenticated command injection issue in OpenClaw’s Docker execution via the `PATH` environment variable), but the provided material does not include substantive details tying it to the Moltbook exposure or the prompt-worm discussion beyond the shared OpenClaw ecosystem context.
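The class of check behind the Wiz finding can be sketched as a probe: an anon key scraped from client-side JavaScript should not return rows when Row Level Security is enforced. The endpoint shape follows Supabase's PostgREST REST API; the base URL and table name below are placeholders, and the interpretation logic is a simplified assumption.

```python
# Hedged sketch: does a client-side Supabase anon key grant row reads?
# With RLS enabled and no permissive policy, anonymous reads return no rows.
def build_probe(base_url: str, anon_key: str, table: str = "messages"):
    """Build the PostgREST request for one row from `table`."""
    url = f"{base_url}/rest/v1/{table}?select=*&limit=1"
    headers = {
        "apikey": anon_key,
        "Authorization": f"Bearer {anon_key}",
    }
    return url, headers


def interpret(status: int, rows: list) -> str:
    """Classify the probe response (simplified assumption, not Wiz's tooling)."""
    if status == 200 and rows:
        return "EXPOSED: anon key reads rows; enable RLS and add policies"
    if status in (401, 403) or (status == 200 and not rows):
        return "OK: RLS (or auth) is blocking anonymous reads"
    return f"inconclusive (HTTP {status})"
```

Issuing the request (e.g., with `urllib.request` against your own project) and feeding the status code and decoded rows into `interpret` would flag the Moltbook-style failure mode, where the exposed key plus missing RLS yielded full table reads.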

1 month ago

