AI Security Risks and Emerging Tooling for Testing LLMs and Agentic Systems
Security reporting and vendor research highlighted accelerating AI/LLM security exposure as enterprises deploy generative AI and autonomous agents faster than defensive controls mature. Commonly cited weaknesses included prompt injection (reported as succeeding against a majority of tested LLMs), training-data poisoning, malicious packages in model repositories, and real-world deepfake-enabled fraud; one example referenced prior disclosure that a China-linked actor weaponized an autonomous coding/agent tool by breaking malicious objectives into benign-looking subtasks. Separately, commentary on AppSec programs argued that AI-assisted development is amplifying alert volumes and making traditional SAST triage increasingly impractical, pushing organizations toward more runtime and workflow-embedded testing approaches.
New and emerging tooling and practices are being positioned to address these risks, including an open-source scanner (Augustus, by Praetorian) that automates 210+ adversarial test techniques across 28 LLM providers as a portable Go binary intended for CI/CD and red-team workflows, and discussion of autonomous AI pentesting tools (e.g., Shannon) that require sensitive inputs such as source code, repo context, and API keys—raising governance and data-handling concerns even when used defensively. Several other items in the set (phishing/XWorm activity, healthcare extortion group “Insomnia,” Singapore telco intrusions attributed to UNC3886, and help-desk payroll fraud) describe unrelated threat activity and do not materially change the AI-security-focused picture.
Related Entities
Affected Products
Sources
Related Stories

AI agent and LLM misuse drives new attack and governance risks
Reporting highlighted how **LLMs and autonomous AI agents** are being misused or creating new enterprise risk. Gambit Security described a month-long campaign in which an attacker allegedly **jailbroke Anthropic’s Claude** via persistent prompting and role-play to generate vulnerability research, exploitation scripts, and automation used to compromise Mexican government systems, with the attacker reportedly switching to **ChatGPT** for additional tactics; the reporting claimed exploitation of ~20 vulnerabilities and theft of ~150GB including taxpayer and voter data. Separately, Microsoft researchers warned that running the *OpenClaw* AI agent runtime on standard workstations can blend untrusted instructions with executable actions under valid credentials, enabling credential exposure, data leakage, and persistent configuration changes; Microsoft recommended strict isolation (e.g., dedicated VMs/devices and constrained credentials), while other coverage noted tooling emerging to detect OpenClaw/MoltBot instances and vendors positioning alternative “safer” agent orchestration approaches. Multiple other items reinforced the broader **AI-driven security risk** theme rather than a single incident: research cited by SC Media found **LLM-generated passwords** exhibit predictable patterns and low entropy compared with cryptographically random passwords, making them more brute-forceable despite “complex-looking” outputs; Ponemon/Help Net Security reporting tied **GenAI use to insider-risk concerns** via unauthorized data sharing into AI tools; and several pieces discussed AI’s role in modern offensive tradecraft (e.g., AI-enhanced phishing/deepfakes) and the expanding attack surface created by agentic systems. Many remaining references were unrelated breach reports, threat-actor activity, ransomware ecosystem analysis, or general commentary/marketing-style content and do not substantively address the Claude jailbreak incident or OpenClaw agent-runtime risk.
2 weeks ago
AI-driven security and governance challenges across enterprises and government
Public- and private-sector security leaders are increasingly treating **AI adoption as inseparable from cybersecurity**, citing governance, workforce, and operational impacts. U.S. government-focused commentary argues agencies must build “cyber-AI” capability across education pipelines and critical infrastructure, as AI simultaneously improves detection/response and enables faster phishing, malware development, and adaptive attacks. Enterprise security coverage echoes the governance challenge: attempts to **ban AI-enabled browsers** are expected to drive “shadow AI” usage, with concerns including sensitive-data leakage to third parties and **prompt-injection** risks; separate reporting highlights friction between developers and security teams as AI-accelerated delivery increases firewall rule backlogs and delays, pressuring organizations to automate controls without weakening oversight. Threat and risk reporting also points to concrete shifts in attacker tradecraft and defensive tooling. Cloudflare’s *Cloudforce One* threat report describes **infostealers** (e.g., **LummaC2**) stealing live session tokens to bypass MFA, heavy automation in credential abuse (bots dominating login attempts), and a ransomware initial-access pipeline increasingly tied to infostealer activity; it also notes a coordinated disruption effort against LummaC2 infrastructure and expectations of successor variants that compress time-to-ransomware. In parallel, AppSec commentary describes Anthropic’s **Claude Code Security** as a reasoning-based code scanning and patch-suggestion capability that claims to identify large numbers of previously unknown high-severity issues, but still requires human approval and does not replace production AppSec programs; other items in the set are largely non-incident thought leadership (skills gap, secure-by-design, AI security “tactics,” and workforce resilience), plus unrelated content (awards, job listings, quantum-resistant data diode product coverage, and an AI nuclear wargame study).
2 weeks ago
AI and LLM Security Risks: Malicious Test Artifacts, Side-Channel Leakage, and LLM-Assisted Code Review
Security researchers highlighted multiple ways **LLM adoption can introduce or amplify risk**, including both technical attacks and unsafe development practices. G DATA reported that a Git-hosted “detector” for the **Shai-Hulud worm** shipped with “test files” that were effectively *real malware*: scripts capable of deleting user directories and, in at least one case, uploading data to actual threat actors. The files were apparently intended to validate detection efficacy and may have been produced via AI-assisted “vibe coding,” where the model replicated malicious behavior one-to-one while comments claimed the code was only a simulation; although the test artifacts are not executed during normal tool operation, users could trigger damage by manually running them. Separate academic work summarized by Bruce Schneier described **side-channel attacks against LLM inference**, where data-dependent timing and token/packet-size patterns (including those introduced by efficiency techniques like speculative decoding) can leak information about user prompts even over encrypted channels. Reported impacts include inferring conversation topics with high accuracy and, in some settings, recovering sensitive data such as phone numbers or credit card numbers via active probing. In parallel, an SC Media segment discussed the operational upside of **LLM-driven secure code analysis**, citing results that improved security across hundreds of open-source projects but noting the importance of human validation and patching effort; an OSINT Team post provided a cautionary, practitioner-level example of how easily malware can be accidentally executed during analysis, reinforcing the need for disciplined handling and isolation when working with suspicious files.
4 weeks ago