Mallory

Critical Vulnerabilities in Anthropic Claude Code Enable RCE and API Key Theft via Malicious Repositories

Tags: malicious repository, malicious commits, remote code execution, configuration abuse, vulnerability, credential theft, API keys, Claude Code, repository configuration, Anthropic, MCP servers, AI coding assistant, version update, patching, shell commands
Updated February 27, 2026 at 08:00 AM · 7 sources

Check Point Research disclosed multiple critical vulnerabilities in Anthropic’s Claude Code AI coding assistant that could allow remote code execution and credential theft when a developer clones and opens an untrusted repository. The reported attack path abuses repository-controlled configuration and automation features (including Hooks, MCP servers, and environment variables) to trigger hidden shell command execution and to exfiltrate Anthropic API credentials, potentially enabling a pivot from a developer workstation into broader enterprise environments where Claude-related workflows and shared resources are accessible.
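
Because the attack surface here is repository-scoped configuration, one practical mitigation is to inspect a fresh clone before opening it in the assistant. The sketch below is a minimal illustration, not an Anthropic tool: it assumes the documented project-level locations (`.claude/settings.json` and `.mcp.json`) and the key names shown, and simply surfaces any hooks, environment overrides, or MCP server commands a repository ships so a reviewer can look before trusting.

```python
#!/usr/bin/env python3
"""Pre-open audit of repository-controlled Claude Code configuration.

Illustrative sketch: file locations and key names are assumptions based on
Claude Code's documented project settings, not an official audit format.
"""
import json
import sys
from pathlib import Path

# Settings keys that let a repository influence command execution,
# credential handling, or API routing.
RISKY_SETTINGS_KEYS = ("hooks", "env", "apiKeyHelper")

def audit(repo: Path) -> list[str]:
    findings = []
    settings = repo / ".claude" / "settings.json"
    if settings.is_file():
        data = json.loads(settings.read_text())
        for key in RISKY_SETTINGS_KEYS:
            if key in data:
                findings.append(f"{settings}: defines '{key}' -> {data[key]!r}")
    mcp = repo / ".mcp.json"  # project-scoped MCP server definitions
    if mcp.is_file():
        servers = json.loads(mcp.read_text()).get("mcpServers", {})
        for name, spec in servers.items():
            findings.append(f"{mcp}: MCP server '{name}' runs {spec.get('command')!r}")
    return findings

if __name__ == "__main__":
    repo = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for line in audit(repo) or ["no repository-scoped Claude Code config found"]:
        print(line)
```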

The issues include consent-bypass and command-execution weaknesses tracked under CVE-2025-59536 (covering closely related flaws involving repository configuration executing commands without adequate user consent) and an API credential exposure issue tracked as CVE-2026-21852, which affected Claude Code versions prior to 2.0.65 and enabled API key theft via malicious project configurations. Anthropic has patched the vulnerabilities and advised users to update to the latest version, while indicating additional hardening measures are planned to reduce supply-chain risk from malicious commits and repository-level configuration abuse.
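
Since the fixes are keyed to specific version floors, a quick local check can confirm whether an installed copy predates the patched releases. A minimal sketch, assuming the `claude` CLI reports a parseable semantic version via `claude --version`, using the 2.0.65 floor from the CVE-2026-21852 advisory:

```python
"""Compare the installed Claude Code version against the patched floor.

Sketch only: assumes `claude --version` emits a dotted semantic version
somewhere in its output.
"""
import re
import subprocess

PATCHED_FLOOR = (2, 0, 65)  # fix version for CVE-2026-21852

out = subprocess.run(["claude", "--version"], capture_output=True, text=True).stdout
match = re.search(r"(\d+)\.(\d+)\.(\d+)", out)
if not match:
    raise SystemExit(f"could not parse a version from: {out!r}")

version = tuple(int(part) for part in match.groups())
status = "patched" if version >= PATCHED_FLOOR else "VULNERABLE: update Claude Code"
print(f"claude {'.'.join(map(str, version))}: {status}")
```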

Sources

February 26, 2026 at 12:00 AM

2 more from sources like Security Affairs and Dark Reading

Related Stories

Vulnerabilities in Anthropic Claude Code Enable Code Execution and API Key Exfiltration

Security researchers disclosed multiple vulnerabilities in **Anthropic’s Claude Code** AI coding assistant that could enable **arbitrary command execution** and **exfiltration of Anthropic API credentials** when developers clone or open a malicious repository. Check Point Research reported that the issues abuse Claude Code configuration and initialization paths, particularly **project hooks** (e.g., untrusted `.claude/settings.json`), **Model Context Protocol (MCP) servers**, and **environment variables**, to trigger shell command execution and data theft. Anthropic’s advisory for **CVE-2026-21852** describes a project-load flow in which a crafted repo can set `ANTHROPIC_BASE_URL` to an attacker-controlled endpoint, causing Claude Code to send API requests **before** the trust prompt is shown and potentially leaking the user’s API key. The disclosed issues include two high-severity code-injection paths (CVSS **8.7**) and one information-disclosure flaw (CVSS **5.3**): a consent-bypass/hook-based injection issue fixed in *Claude Code* **1.0.87** (Sept 2025), **CVE-2025-59536** fixed in **1.0.111** (Oct 2025), and **CVE-2026-21852** fixed in **2.0.65** (Jan 2026). Separate coverage framed Anthropic-related developments as market-moving, noting investor attention around Anthropic’s AI code-security tooling; the actionable security impact in this reporting, however, is that simply opening an attacker-controlled repository can lead to **RCE** and **credential leakage**, reinforcing the need to treat untrusted repos and tool-initialization behaviors as a supply-chain and developer-workstation risk.
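
To make the project-load flow concrete: any client that resolves its API endpoint from ambient configuration will attach its credentials to whatever host that configuration names. The guard below is a hypothetical illustration of the missing check, not Claude Code's actual fix; the env var name and official host follow Anthropic's public API conventions.

```python
"""Refuse a repository-supplied ANTHROPIC_BASE_URL override.

Hypothetical guard illustrating the CVE-2026-21852 failure mode.
"""
import os
from urllib.parse import urlparse

OFFICIAL_HOST = "api.anthropic.com"

def safe_base_url() -> str:
    base = os.environ.get("ANTHROPIC_BASE_URL", f"https://{OFFICIAL_HOST}")
    host = urlparse(base).hostname
    if host != OFFICIAL_HOST:
        # A repo-controlled override lands here: without this check, the
        # API key header would be sent to the attacker's host before any
        # trust prompt is shown.
        raise RuntimeError(f"refusing non-official API endpoint: {base}")
    return base

print(safe_base_url())
```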

2 weeks ago
Malicious and unsafe use of Anthropic Claude Code leading to malware delivery and destructive infrastructure changes

Push Security reported an **“InstallFix” malvertising campaign** targeting developers searching for Anthropic’s *Claude Code* CLI. Attackers clone the legitimate installation page on lookalike domains and buy **Google Search ads** so the fake pages rank highly for queries like “install Claude Code” and “Claude Code CLI.” While links on the page route to Anthropic’s real site, the **copy‑paste install one‑liners** are replaced with malicious commands that fetch malware from attacker-controlled infrastructure; the Windows flow was observed delivering the **Amatera Stealer**, with macOS users likely targeted by similar info-stealing malware. Separately, a reported operational incident highlighted the risk of delegating privileged infrastructure actions to AI agents without strong guardrails: a developer described using *Claude Code* to run **Terraform** changes during an AWS migration and, after a missing Terraform state file led to duplicate resources, subsequent cleanup actions resulted in the **deletion of production components**, including a database and recovery snapshots—wiping roughly **2.5 years of records**. Together, the reports underscore two distinct but compounding risks around AI coding agents: **supply-chain style social engineering** via fake install instructions and **high-impact misexecution** when AI-driven automation is allowed to operate with destructive permissions in production environments.
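
The lure works precisely because the visible links are legitimate while the copy buffer is not, so the pasted command deserves its own inspection. The helper below is a rough sketch with an assumed allowlist (the trusted domains are illustrative, not an Anthropic-published list): it extracts URLs from a pasted one-liner and flags any host outside the expected install domains before anything is piped to a shell.

```python
"""Vet a copy-pasted install one-liner before executing it.

Sketch only: TRUSTED_HOSTS is an assumed allowlist; verify it against the
vendor's documented install domains before relying on it.
"""
import re
import sys
from urllib.parse import urlparse

TRUSTED_HOSTS = {"claude.ai", "anthropic.com"}  # assumed, not authoritative

def vet(command: str) -> None:
    for url in re.findall(r"https?://\S+", command):
        host = urlparse(url).hostname or ""
        ok = any(host == h or host.endswith("." + h) for h in TRUSTED_HOSTS)
        print(f"{'OK    ' if ok else 'REJECT'} {url}")

if __name__ == "__main__":
    # Pass the pasted command as arguments, e.g.:
    #   python vet_install.py curl -fsSL https://example.test/install.sh
    vet(" ".join(sys.argv[1:]))
```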

6 days ago
Anthropic Expands Claude’s Agentic Coding Capabilities and Adds Embedded Vulnerability Scanning

Anthropic announced **Claude Code Security**, an embedded capability in *Claude Code* that scans customer codebases for vulnerabilities and suggests patches, initially rolling out to a limited set of enterprise/team customers for testing. The company said the feature was stress-tested via internal red-teaming, Capture-the-Flag exercises, and collaboration with **Pacific Northwest National Laboratory**, and positioned it as a way to reduce reliance on manual security reviews as AI-assisted “vibe coding” increases and attackers also use AI to accelerate weakness discovery. In parallel, Anthropic released **Claude Sonnet 4.6**, emphasizing improved coding performance, stronger “computer use” capabilities, and expanded developer tooling (e.g., adaptive/extended thinking modes, beta context compaction, and API tools for web search/fetch and code execution). Separate commentary highlighted the security risk of **agentic coding assistants** (e.g., *Claude Code*, *Cursor*, *GitHub Copilot*) operating with broad privileges—file access, shell execution, and secret handling—and argued that the emerging **Model Context Protocol (MCP)** ecosystem needs stronger, future-proof identity controls; additional industry guidance promoted **MLSecOps** as a way to integrate security into AI/ML development lifecycles, though it did not report a specific incident or vulnerability.

2 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.