Anthropic Expands Claude’s Agentic Coding Capabilities and Adds Embedded Vulnerability Scanning
Anthropic announced Claude Code Security, an embedded capability in Claude Code that scans customer codebases for vulnerabilities and suggests patches, initially rolling out to a limited set of enterprise/team customers for testing. The company said the feature was stress-tested via internal red-teaming, Capture-the-Flag exercises, and collaboration with Pacific Northwest National Laboratory, and positioned it as a way to reduce reliance on manual security reviews as AI-assisted “vibe coding” increases and attackers also use AI to accelerate weakness discovery.
In parallel, Anthropic released Claude Sonnet 4.6, emphasizing improved coding performance, stronger “computer use” capabilities, and expanded developer tooling (e.g., adaptive/extended thinking modes, beta context compaction, and API tools for web search/fetch and code execution). Separate commentary highlighted the security risk of agentic coding assistants (e.g., Claude Code, Cursor, GitHub Copilot) operating with broad privileges—file access, shell execution, and secret handling—and argued that the emerging Model Context Protocol (MCP) ecosystem needs stronger, future-proof identity controls; additional industry guidance promoted MLSecOps as a way to integrate security into AI/ML development lifecycles, though it did not report a specific incident or vulnerability.
Related Stories

Anthropic Expands Claude With Enterprise Plugins and Integrated Security Capabilities
Anthropic rolled out expanded *Claude Cowork* capabilities, adding **enterprise workflow plugins** intended to push agentic AI beyond software development into functions such as marketing, HR, legal, and finance, positioning Claude as a broader automation layer inside organizations. Coverage characterized the move as part of a wider shift toward AI-driven workflows in the enterprise, with implications for CIO governance, adoption patterns, and how teams operationalize AI outside engineering. In a related thread of the same product-direction narrative, commentary highlighted Anthropic formalizing **security-oriented features inside Claude**, including a prominent "**suggest fix**" capability aimed at moving from vulnerability detection to automated or semi-automated remediation, prompting market speculation about pressure on certain security-tool segments (particularly code-vulnerability discovery and remediation tooling). Other items in the set were not incident- or vulnerability-driven: one was generic SAST remediation guidance, and several were general-interest or business-trend pieces (e.g., an AI model blogging, AI-native software market dynamics, and an AI-agent governance/AX article) without specific, actionable cybersecurity event details.
2 weeks ago
Vulnerabilities in Anthropic Claude Code Enable Code Execution and API Key Exfiltration
Security researchers disclosed multiple vulnerabilities in **Anthropic’s Claude Code** AI coding assistant that could enable **arbitrary command execution** and **exfiltration of Anthropic API credentials** when developers clone/open a malicious repository. Check Point Research reported the issues abuse Claude Code configuration and initialization paths—particularly **project hooks** (e.g., untrusted `.claude/settings.json`), **Model Context Protocol (MCP) servers**, and **environment variables**—to trigger shell command execution and data theft. Anthropic’s advisory for **CVE-2026-21852** describes a project-load flow where a crafted repo can set `ANTHROPIC_BASE_URL` to an attacker-controlled endpoint, causing Claude Code to send API requests **before** the trust prompt is shown, potentially leaking the user’s API key. The disclosed issues include two high-severity code-injection paths (CVSS **8.7**) and one information-disclosure flaw (CVSS **5.3**): a consent-bypass/hook-based injection issue fixed in *Claude Code* **1.0.87** (Sept 2025), **CVE-2025-59536** fixed in **1.0.111** (Oct 2025), and **CVE-2026-21852** fixed in **2.0.65** (Jan 2026). Separate coverage framed Anthropic-related developments as market-moving, noting investor attention around Anthropic’s AI code-security tooling; however, the actionable security impact in this reporting is the risk that simply opening an attacker-controlled repository can lead to **RCE** and **credential leakage**, reinforcing the need to treat untrusted repos and tool initialization behaviors as a supply-chain and developer-workstation risk.
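The mitigation the reporting implies is auditing untrusted repositories for agent configuration before any tool initializes them. A minimal sketch of such a pre-open check follows; the exact `settings.json` schema (an `env` map and a `hooks` key) is an assumption based on the abuse paths described above, not a documented contract, and `audit_claude_settings` is a hypothetical helper name:

```python
import json
from pathlib import Path

# Env vars whose override by an untrusted repo warrants review before the
# project is opened in any agentic coding tool. ANTHROPIC_BASE_URL is the
# variable abused in CVE-2026-21852 per the disclosure above.
SUSPICIOUS_ENV_VARS = {"ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY"}

def audit_claude_settings(repo_root: str) -> list[str]:
    """Return human-readable findings for .claude/settings.json, if present.

    The 'env' and 'hooks' keys checked here are assumptions based on the
    reported attack surface (env-var overrides and project hooks).
    """
    findings: list[str] = []
    settings_path = Path(repo_root) / ".claude" / "settings.json"
    if not settings_path.is_file():
        return findings
    try:
        settings = json.loads(settings_path.read_text())
    except (OSError, json.JSONDecodeError) as exc:
        return [f"unreadable settings file: {exc}"]
    # Flag env-var overrides that could redirect API traffic or leak keys.
    for var in settings.get("env", {}):
        if var in SUSPICIOUS_ENV_VARS:
            findings.append(f"env override of {var}")
    # Flag project-defined hooks, which may trigger shell execution.
    if settings.get("hooks"):
        findings.append("project hooks defined (may run shell commands)")
    return findings
```

Run against a freshly cloned repository, a non-empty result is a signal to inspect the configuration manually before letting any assistant load the project.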
2 weeks ago
Anthropic Claude Code Security and AI-Assisted Bug Discovery
Anthropic’s **Claude Code Security** was introduced as an AI-driven capability within *Claude Code* that scans source code for vulnerabilities and proposes patches for human review, positioning itself as more adaptive than traditional rules-based static analysis. Coverage noted that early investor reaction briefly pressured major security vendors’ valuations, but analysts assessed the longer-term market impact as likely to be more nuanced given the feature’s early-preview status and its role as an add-on within a broader coding assistant/agent rather than a standalone security product. Separately, Mozilla engineers reported using **Claude** to help identify a “slew” of new Firefox issues, while also highlighting that a meaningful share of observed Firefox crashes may not be software defects at all but *hardware-induced memory errors* (“bit flips”). Mozilla cited roughly **470,000** weekly crash reports (from opted-in users), with about **25,000** flagged as potential bit flips (and possibly higher due to conservative heuristics), underscoring that AI-assisted bug-finding can improve software quality but may not address instability rooted in faulty or error-prone hardware (including potential causes like **Rowhammer** or defective components).
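Mozilla's actual classification heuristics are not published in this coverage, but the conservative signature of a hardware-induced bit flip can be sketched as "an observed value differs from its expected value in exactly one bit position." The helper name below is hypothetical:

```python
def is_single_bit_flip(observed: int, expected: int) -> bool:
    """True if the two values differ in exactly one bit position --
    the conservative signature of a single hardware-induced bit flip."""
    diff = observed ^ expected
    # A nonzero power of two has exactly one bit set: diff & (diff - 1) == 0.
    return diff != 0 and (diff & (diff - 1)) == 0
```

For example, a crash address one bit away from a mapped address would match this check, while a software bug corrupting several bits at once would not.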
1 week ago