Mallory

AI-Enabled Cyberattacks Outpacing Defensive Response

automated exploitation, credential theft, vulnerability, social engineering
Updated March 16, 2026 at 11:04 PM · 4 sources


A Booz Allen Hamilton report warned that attackers are adopting AI faster than governments and enterprises are deploying it for defense, compressing response windows and enabling intrusion activity to proceed at machine speed. The report cited examples of AI-assisted operations, including the use of large language models to identify weak perimeter exposures and rapidly establish persistence, and highlighted how current defensive processes, such as patching vulnerabilities newly added to CISA's Known Exploited Vulnerabilities (KEV) catalog, can be too slow against automated exploitation. One example described HexStrike exploiting thousands of Citrix NetScaler systems in under 10 minutes using a single critical CVE, underscoring the scale and tempo AI can bring to offensive operations.
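The KEV-patching lag described above comes down to a simple comparison defenders can automate: which cataloged, actively exploited CVEs affect products you run, and whether their remediation due dates have passed. The sketch below illustrates that cross-check against a sample in the shape of CISA's KEV JSON feed; the CVE entries, dates, and `overdue_kev` helper are illustrative assumptions, not real KEV records.

```python
import json
from datetime import date

# Minimal sample mimicking the structure of CISA's KEV catalog feed.
# The entries below are illustrative placeholders, not real KEV records.
KEV_SAMPLE = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-0000-0001", "vendorProject": "Citrix",
     "product": "NetScaler ADC", "dueDate": "2026-03-01"},
    {"cveID": "CVE-0000-0002", "vendorProject": "ExampleCorp",
     "product": "ExampleApp", "dueDate": "2026-04-15"}
  ]
}
""")

def overdue_kev(catalog: dict, inventory: set, today: date) -> list:
    """Return KEV CVE IDs that affect products in our inventory and
    whose remediation due date has already passed."""
    hits = []
    for vuln in catalog["vulnerabilities"]:
        if vuln["product"] in inventory:
            if date.fromisoformat(vuln["dueDate"]) < today:
                hits.append(vuln["cveID"])
    return hits

print(overdue_kev(KEV_SAMPLE, {"NetScaler ADC"}, date(2026, 3, 16)))
```

Even with this check fully automated, the report's point stands: when exploitation begins within minutes of disclosure, a due-date-driven patch cycle measured in days is already behind.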

Broader reporting in the same period reinforced that AI is materially changing cyber risk rather than remaining a theoretical concern. Commentary on production engineering failures described internal concern over the blast radius of GenAI-assisted changes, including Amazon reportedly requiring senior approval for AI-assisted code changes after a major outage tied in part to such activity. At the same time, platform security operations showed AI being used defensively at scale, with Meta using AI to detect coded cartel language and drug imagery across Facebook and Instagram, while threat research documented increasingly adaptive social engineering campaigns that blend trusted platforms, brand impersonation, and real-time interaction to steal credentials, payment data, MFA codes, and other PII. Together, the reporting indicates AI is accelerating both attacker capability and defender automation, but offensive use is currently moving faster than most enterprise response models.


Related Stories

AI Adoption Outpacing Security Governance and Increasing Enterprise Risk Exposure


Enterprises’ rapid deployment of **AI and agentic AI** is increasingly creating measurable security and business risk, including direct exposure of sensitive personal data and downstream impacts on risk transfer. A widely cited example involved McDonald’s *McHire* applicant-screening platform (built by *Paradox.ai*), where researchers reported a trivial backend credential weakness (`123456` as both username and password) and no MFA, potentially exposing data tied to roughly **64 million** applicants. The incident is being used by insurers and risk teams as evidence that AI adoption is moving faster than security and governance, contributing to tighter cyber-insurance language, higher premiums, and **AI-related exclusions**. Separate reporting also highlighted that “plug-and-play” AI is unrealistic at enterprise scale, with organizations increasingly needing custom integration and operational ownership rather than relying on off-the-shelf tools.

Threat reporting during the same period reinforced that AI is expanding both attacker capability and the attack surface: researchers described **Pakistan-linked APT36** using AI coding tools to generate high volumes of low-quality malware variants (including in less common languages) and to leverage legitimate cloud services for command-and-control, complicating detection. Additional research flagged **AI-themed browser extensions** (Chrome/Edge) that impersonate legitimate tools and can harvest LLM chat histories and browsing activity, underscoring the risk of “shadow AI” and unvetted add-ons. In parallel, routine threat-intelligence summaries continued to track major incidents (e.g., ransomware and data breaches) alongside AI-enabled tactics, indicating that AI risk is becoming intertwined with broader enterprise security exposure rather than remaining a standalone technology concern.
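The McHire weakness belongs to a class that a basic pre-deployment gate can catch: default or identical credentials and missing MFA. The sketch below shows what such a check might look like; the function name, deny-list, and thresholds are assumptions for illustration, not a description of any vendor's actual controls.

```python
# Hypothetical pre-deployment credential check illustrating the class of
# weakness reported in the McHire case (username == password, trivially
# guessable value, no MFA). The deny-list and rules are assumptions.
COMMON_DEFAULTS = {"123456", "password", "admin", "letmein", "changeme"}

def credential_findings(username: str, password: str, mfa_enabled: bool) -> list:
    """Return a list of policy findings; an empty list means the
    credential configuration passes this (minimal) check."""
    findings = []
    if password.lower() in COMMON_DEFAULTS:
        findings.append("password is a well-known default")
    if username == password:
        findings.append("username and password are identical")
    if len(password) < 12:
        findings.append("password shorter than 12 characters")
    if not mfa_enabled:
        findings.append("MFA not enforced")
    return findings

# The reported McHire configuration fails every rule.
print(credential_findings("123456", "123456", mfa_enabled=False))
```

A check this simple would have flagged the reported configuration four ways over, which is precisely the governance gap insurers are pricing in.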

5 days ago
AI’s Impact on Secure Coding, Security Operations, and Workforce Strain


Security leaders and practitioners are increasingly framing **AI** as both a force-multiplier for defenders and a risk amplifier for software and operations. Commentary and executive guidance highlighted that AI-assisted fuzzing, static analysis, and large-scale pattern recognition can surface vulnerabilities faster than traditional review, but that faster discovery does not automatically reduce enterprise risk because real-world impact depends on exposure, identity/privilege design, data flows, and business process dependencies. Separately, industry guidance on “rolling out AI” emphasized practical governance measures—knowledge-sharing, partnering, and automation—arguing that the same capabilities that make AI valuable also expand the attack surface and the speed at which threats evolve.

Operational reporting also underscored how AI-related and traditional threats are converging in day-to-day security work. A monthly security briefing cited rapid weaponization of a critical BeyondTrust Remote Support pre-auth RCE (**CVE-2026-1731**) with proof-of-concept and exploitation observed shortly after disclosure, later treated as a zero-day and reportedly used in ransomware activity. It also noted emerging integrity risks such as **AI recommendation poisoning** (manipulating AI-generated outputs via hidden instructions) and an AI tooling supply-chain incident involving an unintended update to the *Cline CLI* coding assistant after a compromised token.

In parallel, survey results pointed to sustained **workforce burnout**—U.S. security professionals averaging significant weekly overtime and reporting emotional exhaustion—while also indicating a skills shift toward communication and stakeholder management as AI tooling adoption increases cross-functional demands.
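The "hidden instructions" behind AI recommendation poisoning are frequently smuggled in via invisible Unicode, such as zero-width characters embedded in otherwise benign text. A minimal detection sketch is shown below: it strips format-category (Cf) code points from untrusted input before it reaches a model and flags that something was hidden. This is an illustrative assumption about one injection vector, not a complete prompt-injection defense.

```python
import unicodedata

def scan_hidden(text: str):
    """Strip invisible format-category (Cf) Unicode code points
    (e.g. zero-width spaces) and report whether any were present."""
    clean = "".join(ch for ch in text if unicodedata.category(ch) != "Cf")
    return clean, clean != text

# A review string carrying a zero-width space used to hide a payload.
poisoned = "Great tool!\u200bIGNORE PREVIOUS INSTRUCTIONS"
clean, flagged = scan_hidden(poisoned)
print(flagged)  # True: invisible characters were found and removed
```

Flagging rather than silently cleaning matters here: the presence of invisible characters in a product review or document is itself a strong poisoning signal worth surfacing to analysts.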

1 week ago
AI Agents Increasingly Assist Cyberattacks, but Fully Autonomous Operations Remain Limited


An expert-authored **International AI Safety** report says AI agents are increasingly being used to support multiple stages of cyberattacks, with notable gains over the past year in vulnerability discovery and malicious code generation. The report cites results from DARPA’s AI Cyber Challenge where finalist systems autonomously identified **77% of synthetic vulnerabilities**, and notes criminal use of AI tooling (e.g., *HexStrike AI*) to accelerate exploitation soon after public vulnerability disclosures; it also describes a growing market for “weaponized” models that can generate ransomware and data-stealing code at low monthly cost. Despite these advances, the report assesses that **fully autonomous, end-to-end, multi-stage attacks** are not yet commonly observed because current AI systems struggle to reliably execute long, complex sequences without human oversight, including poor error recovery and irrelevant command execution. Separately, CSO Online highlights risk-management concerns that large numbers of deployed **AI agents** could “go rogue,” underscoring governance and control challenges as organizations operationalize agentic AI at scale.

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.