Mallory

AI-driven shifts in cybersecurity: agentic AI risks, AI-assisted offensive tradecraft, and evolving cybercriminal ecosystems

agentic AI, cybercrime, AI agents, crypto fraud, privilege escalation, exploitability, ransomware, Silent Ransom Group, human oversight, penetration testing
Updated February 4, 2026 at 06:02 PM · 4 sources

Security reporting and research highlighted how AI and automation are reshaping both attacker tradecraft and defender operations, while introducing new enterprise risk. ZDNET described research findings that agentic AI implementations from ServiceNow and Microsoft can be exploitable, warning that broadly permissioned agents could enable lateral movement and privilege escalation across systems of record if an attacker compromises an agent or chains between agents with different access levels; a least-privilege posture for agents was emphasized. Dark Reading separately reported that AI agents are increasingly augmenting—and in some cases supplanting—human penetration testing for “low-hanging” vulnerabilities, but that false positives and the need for human oversight remain material constraints as agentic testing matures.

Threat-intelligence coverage also underscored the industrialization of cybercrime and the ecosystems enabling it. CloudSEK detailed the evolution of the English-speaking cybercriminal milieu known as “The COM,” tracing it from OG-handle trading communities, through successive forum migrations, into a service-oriented underground linked to groups such as Lapsus$, ShinyHunters, Scattered Spider (UNC3944), and Silent Ransom Group, with associated activity spanning breaches, extortion, SIM swapping, ransomware, and crypto fraud. SC Media’s commentary similarly described a cyber underground where criminals can readily buy capabilities (credentials, tooling, automation), calling out techniques including carding and ClickFix social engineering, which tricks users into running copied commands that install infostealers. Separately, Dark Reading reported allegations that the Chronus Group posted 2.3TB of purported Mexican government data affecting up to 36 million people; Mexico’s ATDT disputed the claim as largely repackaged data from prior breaches, saying no new sensitive accounts were identified and that the impacted systems were primarily obsolete, third-party-administered, state-level platforms.

Related Stories

AI-Enabled Threats and Security Failures Across Edge Devices, AI Agents, and Infostealer Campaigns

Threat actors are increasingly operationalizing AI and automation to scale attacks and exploit weak controls across both enterprise and consumer environments. An open-source offensive platform dubbed **CyberStrikeAI**—a Go-based “AI-native security testing” framework integrating 100+ tools—was observed in infrastructure used to target **Fortinet FortiGate** edge devices at scale; researchers linked activity to an IP (212.11.64.250) exposing a `CyberStrikeAI` banner and to scanning/communications patterns consistent with mass exploitation.

Separately, a newly disclosed and rapidly patched **OpenClaw** vulnerability showed how AI agent tooling can be hijacked: researchers reported that a malicious website could take over a developer’s locally running agent due to inadequate trust-boundary validation, prompting urgent upgrades to **OpenClaw v2026.2.25+**.

In parallel, a “vibe-coding” hosted app on the *Lovable* platform leaked data impacting **18,000+ users** after a researcher found **16 flaws (six critical)** tied to mis-implemented backend controls (including missing/incorrect row-level security in *Supabase*), enabling unauthorized access to records and actions like bulk email and account deletion.

Criminal monetization also continues to evolve beyond AI-agent risks. **AuraStealer**, a Russian-language infostealer positioned as a successor/competitor after Lumma disruptions, was advertised on multiple underground forums and is supported by a sizable C2 footprint; analysis of 200+ samples identified **48 C2 domains**, with operators abusing low-cost TLDs (e.g., `.shop`, `.cfd`) and using **Cloudflare** as a reverse proxy to mask origin infrastructure.
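The Lovable finding comes down to a missing server-side ownership check. A minimal Python sketch of the kind of control the app reportedly lacked; all names here are illustrative, not the actual app’s schema or Supabase’s API:

```python
# Illustrative row-level authorization check. Without a check like this
# (the missing row-level security), any authenticated user could read
# or delete any row simply by guessing its id.
from dataclasses import dataclass


@dataclass
class Record:
    owner_id: str
    payload: str


# Hypothetical in-memory store standing in for a backend table.
DB = {
    "r1": Record(owner_id="alice", payload="alice's data"),
    "r2": Record(owner_id="bob", payload="bob's data"),
}


def fetch_record(record_id: str, requester_id: str) -> str:
    """Return a record's payload only if the requester owns the row."""
    rec = DB.get(record_id)
    if rec is None:
        raise KeyError(record_id)
    if rec.owner_id != requester_id:
        # Enforce ownership server-side; never trust the client's claim.
        raise PermissionError("requester does not own this row")
    return rec.payload
```

The same principle applies whether the enforcement lives in application code, as here, or in database policies: the authorization decision must happen on the server, per row.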
Broader reporting and commentary reinforced that identity and access failures remain a dominant breach driver and that AI adoption is expanding the attack surface via over-privileged agents and “shadow AI,” while ransomware operators increasingly target recovery paths (including backups) and dwell to corrupt restore points. Several items in the set were non-incident thought leadership or workforce content (skills gap, jobs listings, awards, and general AI security tips) and did not add event-specific technical details beyond high-level risk framing.

2 weeks ago
AI Security Governance and Emerging AI-Enabled Threats in Enterprise Environments

Security and media reporting highlighted growing enterprise exposure created by **AI agents** and the expanding ecosystem around the *Model Context Protocol (MCP)*. AWS detailed new IAM governance controls for AWS-managed remote MCP servers, introducing standardized context keys `aws:ViaAWSMCPService` and `aws:CalledViaAWSMCP` to differentiate agent-initiated API calls from human activity and enable tighter policy enforcement, with additional network perimeter controls (VPC endpoint support) planned. Separately, AI governance startup **JetStream** announced a $34M seed round to provide visibility and control over AI behavior in production, explicitly targeting MCP server/key sprawl and cost/accountability concerns; multiple commentaries also warned that AI-driven development and “AI ultimatums” can increase **IP theft** and governance risk if organizations lack clear controls and monitoring.

Threat-focused coverage underscored that AI is also accelerating offensive capability and complicating defense. CSO Online reported **AI-powered attack kits** moving into open source (including tooling referenced as *CyberStrikeAI*), lowering barriers for cybercrime and enabling faster iteration of malicious tradecraft. In parallel, FBI messaging emphasized that **Salt Typhoon** activity remains ongoing following prior compromises of sensitive US telecom infrastructure, reinforcing the need for stronger government–telecom partnerships and improved readiness against Chinese cyber operations (including the FBI’s *Operation Winter SHIELD* focus on preparedness and faster intel sharing).
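To make the AWS control concrete, here is an illustrative policy sketch (built as a Python dict) showing how the reported `aws:CalledViaAWSMCP` key might be used to deny API calls that did not arrive via the managed MCP path. The statement shape follows standard IAM JSON; the operator and value semantics applied to these new keys are an assumption here, not AWS documentation:

```python
# Hypothetical IAM policy using the aws:CalledViaAWSMCP context key
# reported above. "BoolIfExists" is a standard IAM condition operator;
# whether the new key is evaluated exactly this way is an assumption.
mcp_guard_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAgentCallsOutsideManagedMCP",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # Deny when the call did NOT come through the
                # AWS-managed MCP service.
                "BoolIfExists": {"aws:CalledViaAWSMCP": "false"}
            },
        }
    ],
}


def denies_non_mcp_calls(policy: dict) -> bool:
    """Return True if any statement denies calls lacking the MCP context."""
    for stmt in policy["Statement"]:
        cond = stmt.get("Condition", {}).get("BoolIfExists", {})
        if stmt["Effect"] == "Deny" and cond.get("aws:CalledViaAWSMCP") == "false":
            return True
    return False
```

Attaching a guard like this to an agent’s role would scope that identity to MCP-mediated activity only, which is the least-privilege pattern the coverage recommends.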
Additional technical threat-hunting research described operationalizing **Cobalt Strike** C2 feeds via API automation for SIEM/EDR use, noting continued rapid infrastructure rotation and increased association with state-backed espionage and advanced ransomware operations, while a Dark Reading podcast recapped Interpol-supported law-enforcement disruption of an African cybercrime syndicate (hundreds of arrests and multiple malware decryptions).
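The feed-operationalization step described above can be sketched as a small normalization function that turns raw feed records into deduplicated, timestamped indicators a SIEM/EDR can ingest. The feed schema and field names below are hypothetical, since real C2 feeds vary by provider:

```python
# Sketch of normalizing a (hypothetical) JSON C2 feed for SIEM ingestion.
# Deduplicates by IP, drops malformed entries, and stamps ingestion time.
import json
from datetime import datetime, timezone


def normalize_c2_feed(raw_json: str) -> list:
    """Convert a raw C2 feed into SIEM-ready indicator records."""
    seen = set()
    out = []
    for entry in json.loads(raw_json):
        ip = entry.get("ip")
        if not ip or ip in seen:
            continue  # skip malformed or duplicate entries
        seen.add(ip)
        out.append({
            "indicator": ip,
            "type": "ipv4-c2",
            "first_seen": entry.get("first_seen", ""),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "source": "c2-feed",  # placeholder provenance tag
        })
    return out
```

Given the rapid infrastructure rotation the research notes, a pipeline like this would need to run on a short interval and age out stale indicators rather than accumulate them indefinitely.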

1 week ago
AI Adoption and Misuse Expands Enterprise and Cybercrime Risk

No single incident ties the reporting together; the dominant theme is **AI’s expanding role in both enterprise operations and criminal tradecraft**, alongside broader, non-AI security trend commentary. A Docker-sponsored survey reported by *Help Net Security* says **60% of organizations run AI agents in production**, but **security/compliance is the top scaling barrier (40%)**, with recurring concerns including *prompt injection*, *tool poisoning*, runtime isolation/sandboxing, auditability, and credential/access control in distributed agent systems. Separately, forum-traffic research summarized by *Help Net Security* found cybercriminals increasingly using mainstream and local AI models to support phishing, code generation, and social engineering, with frequent discussion of jailbreaking and the use of stolen/resold premium AI accounts.

Several other items are adjacent but not about the same specific story: an ESET article provides **generic guidance** on detecting **AI voice deepfakes** used for fraud; an Ars Technica piece covers **copyright/data memorization** risks in LLMs; and multiple outlets publish broader security trend or opinion content (quantum preparedness, ransomware targeting manufacturing, Romanian warnings about ransomware aligning with Russian hybrid aims, ATM jackpotting increases, and a Check Point retrospective). Some entries are primarily **commentary, historical analogy, newsletters, or how-to recon guidance** rather than new threat reporting, and should be treated as lower-signal for executive situational awareness unless your organization is actively deploying agentic AI or tracking AI-enabled fraud/social engineering.
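One mitigation implied by the survey’s concerns (tool poisoning, credential/access control in agent systems) is an explicit allowlist gate on agent tool calls, so a prompt-injected agent cannot invoke tools outside its approved set. A minimal, illustrative sketch; the tool names are hypothetical:

```python
# Illustrative allowlist gate for agent tool invocations. Any tool not
# explicitly approved is refused with an auditable reason rather than
# silently executed.
ALLOWED_TOOLS = {"search_docs", "summarize"}  # hypothetical approved set


def gate_tool_call(tool_name: str, args: dict) -> dict:
    """Refuse any tool invocation not on the explicit allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        return {
            "allowed": False,
            "reason": f"tool '{tool_name}' not allowlisted",
        }
    return {"allowed": True, "tool": tool_name, "args": args}
```

Logging every refusal from a gate like this also addresses the auditability concern the survey respondents raised.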

3 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.