Skip to main content
Mallory
Mallory

AI-Driven Threats and Tools in Offensive Security and Malware Evasion

offensive securityNeuroSploitv2threat landscapeAI-drivenexploitsmalwarecyberattacksevasionpenetration testingAIsecurity solutionsdefensive strategiesautomationvulnerabilitycrypters
Updated January 1, 2026 at 02:09 AM4 sources
AI-Driven Threats and Tools in Offensive Security and Malware Evasion

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Threat actors are increasingly leveraging artificial intelligence, particularly large language models (LLMs), to automate and enhance cyberattacks. Recent research demonstrates that LLMs such as GPT-4o and Claude can be manipulated to generate working exploits for enterprise software like Odoo ERP, significantly lowering the barrier for less-skilled attackers to launch sophisticated attacks. Concurrently, the underground market is witnessing the emergence of AI-powered malware tools, such as metamorphic crypters, which use AI to dynamically rewrite malicious code and evade detection by endpoint security solutions like Windows Defender. These developments highlight a rapidly evolving threat landscape where AI is both a tool for attackers and a challenge for defenders.

In response to these threats, the cybersecurity community is developing advanced AI-powered penetration testing frameworks like NeuroSploitv2. This tool integrates multiple LLMs and employs specialized agent roles, grounding techniques, and safety guardrails to automate vulnerability discovery and exploitation in a controlled, ethical manner. Meanwhile, defenders are also exploring granular attribute-based access control and post-quantum encryption to mitigate risks from context window injections in AI systems. The convergence of AI in both offensive and defensive security operations underscores the urgent need for robust safeguards and adaptive security strategies to address the dual-use nature of these technologies.

Related Entities

Organizations

Related Stories

Malicious Use of AI and LLMs for Evasion and C2 in Cyberattacks

Cybercriminals are increasingly leveraging large language models (LLMs) and AI-driven techniques to enhance their attack capabilities and evade detection. Recent research highlights the operationalization of LLM-in-the-loop tradecraft, where malware dynamically generates host-specific PowerShell commands for reconnaissance and data collection, frequently rewriting itself to bypass static and machine learning-based security detections. Attackers are also exploiting stolen API keys and enterprise AI connectors to establish covert command-and-control (C2) channels, disguising malicious activity as legitimate AI traffic. These tactics are being used to target critical infrastructure, with a focus on IT systems that can impact operational technology environments through identity abuse, weak segmentation, and ransomware attacks. In parallel, threat actors are attempting to manipulate AI-based security tools directly. A malicious npm package, `eslint-plugin-unicorn-ts-2`, was discovered embedding a prompt intended to influence the decision-making of AI-driven scanners, while also exfiltrating sensitive environment variables via a post-install script. This approach signals a new trend where attackers not only evade traditional detection but also actively seek to undermine the effectiveness of AI-powered defenses. The emergence of underground markets for malicious LLMs further underscores the growing sophistication and commercialization of AI-enabled cybercrime.

3 months ago
AI Security Risks and Emerging Tooling for Testing LLMs and Agentic Systems

AI Security Risks and Emerging Tooling for Testing LLMs and Agentic Systems

Security reporting and vendor research highlighted accelerating **AI/LLM security exposure** as enterprises deploy generative AI and autonomous agents faster than defensive controls mature. Commonly cited weaknesses included **prompt injection** (reported as succeeding against a majority of tested LLMs), **training-data poisoning**, malicious packages in **model repositories**, and real-world **deepfake-enabled fraud**; one example referenced prior disclosure that a China-linked actor weaponized an autonomous coding/agent tool by breaking malicious objectives into benign-looking subtasks. Separately, commentary on AppSec programs argued that AI-assisted development is amplifying alert volumes and making traditional **SAST triage** increasingly impractical, pushing organizations toward more *runtime* and workflow-embedded testing approaches. New and emerging tooling and practices are being positioned to address these risks, including an open-source scanner (*Augustus*, by Praetorian) that automates **210+ adversarial test techniques** across **28 LLM providers** as a portable Go binary intended for CI/CD and red-team workflows, and discussion of autonomous AI pentesting tools (e.g., *Shannon*) that require sensitive inputs such as source code, repo context, and API keys—raising governance and data-handling concerns even when used defensively. Several other items in the set (phishing/XWorm activity, healthcare extortion group “Insomnia,” Singapore telco intrusions attributed to **UNC3886**, and help-desk payroll fraud) describe unrelated threat activity and do not materially change the AI-security-focused picture.

1 months ago

Emerging AI-Driven Cybersecurity Threats and Exploits

Recent research and threat intelligence highlight the growing risks posed by advanced AI models in the cybersecurity landscape. Studies demonstrate that state-of-the-art AI agents, such as Claude Opus 4.5 and GPT-5, are now capable of autonomously exploiting smart contracts, uncovering zero-day vulnerabilities, and generating real-world economic harm. OpenAI has publicly acknowledged the dual-use nature of its models, warning that future iterations may reach 'high' cybersecurity risk levels, with the potential to develop working zero-day exploits and assist in complex intrusion operations. These developments underscore the urgent need for proactive defensive measures and the adoption of AI for security as well as offense. In parallel, threat actors are leveraging AI to orchestrate sophisticated supply chain attacks, as seen in the PyStoreRAT campaign, which used AI-generated GitHub projects to target IT and OSINT professionals with stealthy malware. Security experts and industry leaders are raising concerns about the expanding attack surface, including the exploitation of antiquated systems and shadow APIs by agentic AI, and the challenges of integrating AI into operational technology environments. The convergence of AI capabilities with cyber offense and defense is rapidly reshaping the threat landscape, demanding new strategies for risk management, governance, and technical controls.

3 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.