Malvertising and Supply-Chain Lures Impersonate AI Developer Tools to Deliver Infostealers and RATs
Threat actors are abusing interest in AI developer tools by impersonating installers and setup guides to trick users into executing malware. Fake installation-guide pages for Anthropic’s Claude Code were promoted via Google Ads to rank highly for searches like “Claude Code install/CLI,” leading Windows and macOS users to run copy-pasted commands in an InstallFix campaign (a variant of ClickFix) that ultimately deployed Amatera (an ACR Stealer-based MaaS infostealer). Push Security reported the malware steals browser-stored credentials, cookies, session tokens, and system information, and the infrastructure used legitimate hosting/CDN services (e.g., Squarespace, Cloudflare Pages, Tencent EdgeOne) to reduce suspicion.
In a related AI-tool impersonation theme, JFrog identified a malicious npm package, @openclaw-ai/openclawai, posing as an OpenClaw installer that targets macOS users to steal credentials and establish persistent remote access. The package uses a postinstall hook to reinstall itself globally and registers a CLI via the bin field pointing to scripts/setup.js, which presents a fake installer UI and then prompts for the user’s system password via a bogus Keychain/iCloud authorization flow. The malware (self-identified as GhostLoader) was reported to collect browser data, crypto wallets, SSH keys, Apple Keychain databases, and iMessage history, while also deploying a RAT with SOCKS5 proxy capability and “live browser session cloning,” indicating a blend of credential theft and long-term access objectives.
Related Entities
Affected Products
Sources
Related Stories

Malicious and unsafe use of Anthropic Claude Code leading to malware delivery and destructive infrastructure changes
Push Security reported an **“InstallFix” malvertising campaign** targeting developers searching for Anthropic’s *Claude Code* CLI. Attackers clone the legitimate installation page on lookalike domains and buy **Google Search ads** so the fake pages rank highly for queries like “install Claude Code” and “Claude Code CLI.” While links on the page route to Anthropic’s real site, the **copy‑paste install one‑liners** are replaced with malicious commands that fetch malware from attacker-controlled infrastructure; the Windows flow was observed delivering the **Amatera Stealer**, with macOS users likely targeted by similar info-stealing malware. Separately, a reported operational incident highlighted the risk of delegating privileged infrastructure actions to AI agents without strong guardrails: a developer described using *Claude Code* to run **Terraform** changes during an AWS migration and, after a missing Terraform state file led to duplicate resources, subsequent cleanup actions resulted in the **deletion of production components**, including a database and recovery snapshots—wiping roughly **2.5 years of records**. Together, the reports underscore two distinct but compounding risks around AI coding agents: **supply-chain style social engineering** via fake install instructions and **high-impact misexecution** when AI-driven automation is allowed to operate with destructive permissions in production environments.
6 days ago
Malware campaigns abuse developer ecosystems via malicious npm packages and GitHub repositories
Security researchers reported multiple **software supply chain-style malware distribution** efforts abusing developer-adjacent platforms. JFrog detailed a malicious npm package, `@openclaw-ai/openclawai`, masquerading as an *OpenClaw* CLI installer; once executed, it uses a `postinstall` hook to reinstall globally and drop an obfuscated first-stage (`setup.js`) that deploys a multi-stage payload internally identified as **GhostLoader** (campaign tracked as **GhostClaw**). The malware is designed to persist and exfiltrate a broad set of sensitive data from developer workstations, including credentials (e.g., cloud config artifacts for **AWS/GCP/Azure**), macOS Keychain data, browser sessions, SSH keys, and cryptocurrency wallet/seed material. Separately, Trend Micro reported a large-scale distribution operation for the **BoryptGrab** information stealer via **100+ public GitHub repositories** that pose as legitimate tools and game cheats. The campaign uses SEO manipulation (keyword-stuffed READMEs and lookalike download pages) to drive victims from search results into redirect chains that ultimately deliver ZIP archives containing the stealer; some variants also deploy a PyInstaller backdoor (**TunnesshClient**) that establishes a reverse SSH tunnel for attacker communications. Reported indicators (e.g., Russian-language comments and related infrastructure) suggest a possible Russian nexus, and the observed targeting focuses on harvesting browser data, crypto wallets, system information, and user files.
1 weeks ago
AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation
Multiple reports describe threat actors abusing *AI-adjacent* and open-source distribution channels to deliver malware or manipulate automated agents. Straiker STAR Labs reported a **SmartLoader** campaign that trojanized a legitimate-looking **Model Context Protocol (MCP)** server tied to *Oura* by cloning the project, fabricating GitHub credibility (fake forks/contributors), and getting the poisoned server listed in MCP registries; the payload ultimately deployed **StealC** to steal credentials and crypto-wallet data. Separately, researchers observed attackers using trusted platforms and SaaS reputations for delivery and monetization: a fake Android “antivirus” (*TrustBastion*) was hosted via **Hugging Face** repositories to distribute banking/credential-stealing malware, and Trend Micro documented spam/phishing that abused **Atlassian Jira Cloud** email reputation and **Keitaro TDS** redirects to funnel targets (including government/corporate users across multiple language groups) into investment scams and online casinos. In parallel, research highlights emerging risks where **AI agents and AI-enabled workflows become the target or the transport layer**. Check Point demonstrated “**AI as a proxy**,” where web-enabled assistants (e.g., *Grok*, *Microsoft Copilot*) can be coerced into acting as covert **C2 relays**, blending attacker traffic into commonly allowed enterprise destinations, and outlined a trajectory toward prompt-driven, adaptive malware behavior. OpenClaw featured in two distinct security developments: an OpenClaw advisory described a **log-poisoning / indirect prompt-injection** weakness (unsanitized WebSocket headers written to logs that may later be ingested as trusted context), while Hudson Rock reported an infostealer incident that exfiltrated sensitive **OpenClaw configuration artifacts** (e.g., `openclaw.json` tokens, `device.json` keys, and “memory/soul” files), signaling that infostealer operators are beginning to harvest AI-agent identities and automation secrets in addition to browser credentials.
4 weeks ago