Multiple Misconfiguration and Access-Control Flaws Expose AI and SaaS Platforms to Data Theft and Account Takeover
Security researchers reported a critical Moltbook exposure caused by an unauthenticated database/API access issue that allowed enumeration of agent records (e.g., GET /api/agents/{id}) and leakage of email addresses, JWT login_tokens, and third-party api_keys, enabling agent hijacking and downstream abuse of connected services. Separately, Cal.com Cloud was found vulnerable to a chained set of broken access controls and signup/invite-token logic flaws that enabled complete account takeover and access to sensitive booking data (attendee details, emails, and booking histories) at scale, including organizational accounts.
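The Moltbook issue above is a classic unauthenticated object-reference exposure: an object-ID endpoint returns full records to anonymous callers. A minimal triage helper is sketched below; the function name and the sensitive-field names are illustrative assumptions drawn from the fields named in the report (`email`, `login_token`, `api_keys`), not a published tool.

```python
import json

def classify_response(status_code: int, body: str) -> str:
    """Classify an unauthenticated probe of an object-ID endpoint
    such as GET /api/agents/{id}.

    A 401/403 means authentication is enforced; a 200 that echoes back
    sensitive record fields indicates the enumeration condition
    described in the report.
    """
    if status_code in (401, 403):
        return "auth-enforced"
    if status_code == 200:
        try:
            record = json.loads(body)
        except ValueError:
            return "inconclusive"
        if not isinstance(record, dict):
            return "inconclusive"
        # Sensitive fields named in the disclosure (illustrative list).
        leaked = {"email", "login_token", "api_keys"} & set(record)
        if leaked:
            return "exposed: " + ",".join(sorted(leaked))
    return "inconclusive"
```

Running this against every ID in a small range would reveal whether records are enumerable at all, without ever needing valid credentials, which is exactly the condition the researchers described.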
In parallel, SentinelLabs documented that roughly 175,000 Ollama instances were reachable from the internet due to a common deployment misconfiguration (binding to 0.0.0.0/public interfaces), creating conditions for arbitrary code execution and access to external resources—especially where tool-calling features were enabled. A distinct IoT case study described Molekule air purifiers exposing fleet-wide telemetry because an AWS Cognito Identity Pool permitted unauthenticated access to AWS IoT Core MQTT subscriptions, leaking device shadow data (e.g., Wi‑Fi SSIDs, MAC addresses, device names, sensor readings) for ~100,000 devices; the disclosed policy reportedly allowed read/subscribe access but not device control, which required per-device certificates.
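The Ollama misconfiguration pattern is simply a daemon bound to all interfaces instead of loopback. A small, illustrative classifier for a bind address (such as the value of `OLLAMA_HOST`) is sketched below; the function name and risk labels are ours, not from the reporting.

```python
import ipaddress

def bind_risk(host: str) -> str:
    """Classify a bind address (e.g., the OLLAMA_HOST value) by
    exposure risk. Binding to all interfaces is the misconfiguration
    behind the internet-exposed instances described above."""
    if host in ("0.0.0.0", "::"):
        return "public: listens on all interfaces"
    addr = ipaddress.ip_address(host)
    if addr.is_loopback:
        return "local-only"
    if addr.is_private:
        return "lan-reachable"
    return "public"
```

The safe default for a workstation deployment is a loopback bind (`127.0.0.1`), with any remote access brokered through an authenticating reverse proxy rather than a direct public listener.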
Related Stories

Insecure Public Exposure of Self-Hosted AI Infrastructure (Ollama and MCP Servers)
Security researchers and media reporting highlighted widespread **public exposure of self-hosted AI infrastructure** caused by rushed, poorly governed deployments. Reporting cited **14,000+ internet-accessible Ollama inference servers**, with one analysis estimating **~20%** hosting models susceptible to unauthorized access, and separate findings identifying **10,000+ Ollama servers** exposed **without any authentication**—often due to developers binding services to all interfaces or standing up local inference/gateway components (e.g., *LiteLLM*, *vLLM*) outside normal asset inventories. The net effect is “shadow AI” that creates material blind spots for security teams and increases the likelihood of unauthorized model access, data exposure, and abuse of internal AI services. In parallel, enterprise adoption of **Model Context Protocol (MCP) servers**—which bridge LLMs to internal tools and data—has introduced similar exposure risk when deployed without access controls. Guidance and analysis noted that MCP, introduced as an open standard without native role restrictions, leaves security implementation to operators; researchers reportedly identified **nearly 2,000 MCP servers** on the open web with **no security controls**, increasing risk of unauthorized access, data loss, and potentially **arbitrary command execution** via overly privileged integrations. A vendor announcement positioned an AI-agent governance platform (*MintMCP*) as a response to these visibility and control gaps (audit trails, policy enforcement, access controls), but it primarily serves as product marketing rather than independent incident reporting.
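For the "shadow AI" discovery problem described above, an unauthenticated 200 response from an Ollama server's model-listing endpoint (`/api/tags`, on the default port 11434) is itself the exposure signal. The parsing helper below is an illustrative inventory aid; the network fetch is omitted so the sketch stays self-contained.

```python
import json

def list_exposed_models(tags_json: str) -> list[str]:
    """Parse the JSON body returned by Ollama's /api/tags endpoint.

    If an unauthenticated GET to http://host:11434/api/tags succeeds,
    the host is exposed; the model names help inventory what an
    anonymous caller can reach.
    """
    data = json.loads(tags_json)
    return sorted(m.get("name", "?") for m in data.get("models", []))
```

Pointing such a check at an organization's own address ranges is a lightweight way to surface inference servers stood up outside normal asset inventories before external scanners find them.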
1 month ago
Multiple High-Severity Vulnerability Disclosures Across ICS, Open-Source Software, and SOHO Routers
Public disclosures highlighted multiple high-severity vulnerabilities across industrial control systems, open-source software, and consumer networking gear, with several issues enabling **unauthenticated remote compromise**. Johnson Controls disclosed **CVE-2025-26385** (CVSS 10.0), a critical SQL injection affecting multiple building/ICS management products (including *ADS/ADX, LCS8500, NAE8500, SCT, CCT*) that can allow remote, unauthenticated attackers to execute arbitrary SQL to alter/delete/exfiltrate data; CISA guidance emphasized isolating control system networks from the internet, segmentation, and controlled remote access (e.g., VPNs). Additional unauthenticated remote issues include **CVE-2026-25069** in *SunFounder Pironman Dashboard* (path traversal in log API endpoints enabling arbitrary file read/deletion) and **CVE-2025-51958** in the *DokuWiki* `runcommand` plugin (unauthenticated command execution via `lib/plugins/runcommand/postaction.php`). Other disclosures include developer-tooling and application-layer injection flaws and multiple router memory-corruption bugs with public exploit references. *Orval* fixed **CVE-2026-25141**, a code-injection issue where incomplete escaping can be bypassed using **JSFuck**-style payloads, and *Cybersecurity AI (CAI)* addressed **CVE-2026-25130**, where `subprocess.Popen(..., shell=True)` enables argument/command injection leading to RCE (notably via the `find_file()` tool). Data-layer issues include **CVE-2025-69662** in *geopandas* (`to_postgis()` SQL injection) and **CVE-2026-24854** in *ChurchCRM* (authenticated SQL injection via `PerID` in `/PaddleNumEditor.php`, patched in 6.7.2), while **CVE-2025-36384** affects *IBM Db2 for Windows* (local privilege escalation via unquoted search path). 
SOHO router flaws **CVE-2026-1686** (*Totolink A3600R*) and **CVE-2026-1637** (*Tenda AC21*) describe remotely reachable buffer/stack overflows with publicly available exploit material, increasing the likelihood of opportunistic exploitation where exposed management interfaces exist.
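Among the disclosures above, the CAI `subprocess.Popen(..., shell=True)` issue (CVE-2026-25130) belongs to a well-understood injection class. The sketch below uses `echo` as a harmless stand-in for the reported `find_file()` tool to contrast the vulnerable and fixed patterns; function names are illustrative, not from the advisory.

```python
import subprocess

def echo_unsafe(arg: str) -> str:
    # Vulnerable pattern: user input concatenated into a shell string.
    # A ';' in the input splits the line, so an attacker-supplied
    # command actually executes.
    return subprocess.run("echo " + arg, shell=True,
                          capture_output=True, text=True).stdout

def echo_safe(arg: str) -> str:
    # Fix: pass an argv list (default shell=False). No shell parses the
    # argument, so metacharacters like ';' stay literal text.
    return subprocess.run(["echo", arg],
                          capture_output=True, text=True).stdout
```

With the payload `"foo; echo INJECTED"`, the unsafe variant runs two commands while the safe variant prints the payload verbatim as a single argument, which is the entire difference between RCE and a harmless string.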
1 month ago
Moltbook Data Exposure and Emerging Risk of Viral AI Prompt Worms
Security researchers reported a major data exposure affecting **Moltbook**, an AI-agent-focused social network used by autonomous agents such as **OpenClaw**. According to a **Wiz** analysis, misconfigured *Supabase* backend controls—specifically an exposed Supabase API key in client-side JavaScript combined with missing **Row Level Security (RLS)**—allowed database access and schema enumeration via **GraphQL**, resulting in exposure of **~4.75 million records**. The leaked data reportedly included **~1.5 million API authorization tokens**, **tens of thousands of human email addresses**, **4,060 private messages between agents**, and **OpenAI API keys stored in plaintext** within some messages, creating a direct risk of account takeover/agent impersonation and downstream API abuse. Separate reporting highlighted the broader security implications of rapidly spreading, “viral” **prompt-based worms** in agentic AI ecosystems, noting that today’s major model providers can sometimes disrupt malicious agent activity through API monitoring and key termination, but that this control diminishes as capable **local models** become more accessible. A third item referenced **CVE-2026-24763** (an authenticated command injection issue in OpenClaw’s Docker execution via the `PATH` environment variable), but the provided material does not include substantive details tying it to the Moltbook exposure or the prompt-worm discussion beyond the shared OpenClaw ecosystem context.
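Two of the Moltbook findings (an API key shipped in client-side JavaScript and OpenAI keys stored in plaintext messages) are detectable with simple pattern scanning. The patterns below are our own illustrative approximations of OpenAI-style secrets and JWT-shaped tokens (the format Supabase anon/service keys use), not rules from the Wiz write-up.

```python
import re

# Illustrative credential patterns (assumptions, not from the report):
# OpenAI-style secret keys and three-part JWT-shaped tokens.
KEY_PATTERNS = {
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "jwt_token": re.compile(
        r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
}

def scan_for_secrets(text: str) -> dict[str, int]:
    """Count likely plaintext credentials in a message body or a
    shipped JavaScript bundle."""
    return {name: len(pat.findall(text)) for name, pat in KEY_PATTERNS.items()}
```

Running a scan like this over built front-end assets in CI, alongside enabling Row Level Security on every exposed table, addresses both halves of the misconfiguration the analysis describes.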
1 month ago