Malicious Use of AI and LLMs for Evasion and C2 in Cyberattacks
Cybercriminals are increasingly leveraging large language models (LLMs) and AI-driven techniques to enhance their attack capabilities and evade detection. Recent research highlights the operationalization of LLM-in-the-loop tradecraft, where malware dynamically generates host-specific PowerShell commands for reconnaissance and data collection, frequently rewriting itself to bypass both static and machine-learning-based security detections. Attackers are also exploiting stolen API keys and enterprise AI connectors to establish covert command-and-control (C2) channels, disguising malicious activity as legitimate AI traffic. These tactics are being used against critical infrastructure, targeting IT systems whose compromise can reach operational technology (OT) environments through identity abuse, weak segmentation, and ransomware.
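One practical countermeasure to stolen-key abuse is auditing where each AI API key is actually used from. The sketch below is a minimal, illustrative example of that idea; the log schema, key identifiers, and allowlists are assumptions for this example, not any provider's real audit format.

```python
# Illustrative sketch: flag LLM API-key usage from source IPs that are not
# on that key's allowlist (a possible signal of a stolen key being used
# for covert C2). Key IDs, IPs, and the event schema are assumptions.

ALLOWED_SOURCES = {
    "key-prod-01": {"10.0.4.10", "10.0.4.11"},   # e.g. CI runners
    "key-analytics": {"10.0.9.5"},               # e.g. internal service
}

def flag_suspicious_usage(events):
    """Return events where an API key was used from a source IP
    not on that key's allowlist."""
    flagged = []
    for ev in events:
        allowed = ALLOWED_SOURCES.get(ev["key_id"], set())
        if ev["src_ip"] not in allowed:
            flagged.append(ev)
    return flagged

events = [
    {"key_id": "key-prod-01", "src_ip": "10.0.4.10"},     # expected source
    {"key_id": "key-prod-01", "src_ip": "203.0.113.77"},  # unknown external IP
]
print(flag_suspicious_usage(events))
```

In practice this check would run against provider usage logs or egress proxy records rather than an in-memory list, but the allowlist-per-credential pattern is the same.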
In parallel, threat actors are attempting to manipulate AI-based security tools directly. A malicious npm package, eslint-plugin-unicorn-ts-2, was discovered embedding a prompt intended to influence the decision-making of AI-driven scanners, while also exfiltrating sensitive environment variables via a post-install script. This approach signals a new trend where attackers not only evade traditional detection but also actively seek to undermine the effectiveness of AI-powered defenses. The emergence of underground markets for malicious LLMs further underscores the growing sophistication and commercialization of AI-enabled cybercrime.
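Because the eslint-plugin-unicorn-ts-2 package exfiltrated data via a post-install script, one simple defensive audit is enumerating which installed npm packages declare install-time lifecycle hooks at all. The following is a hedged sketch of that audit; the scan location and output shape are assumptions for illustration.

```python
# Defensive sketch: list npm packages under a node_modules tree that declare
# install-time lifecycle scripts (preinstall/install/postinstall) -- the hook
# class abused by the malicious package described above. Most packages need
# no such hooks, so the resulting list is a small, reviewable surface.
import json
from pathlib import Path

LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall")

def packages_with_install_hooks(node_modules: Path) -> dict:
    """Map package name -> {hook: command} for packages declaring
    install-time scripts in their package.json."""
    hits = {}
    for manifest in node_modules.glob("**/package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        scripts = data.get("scripts") or {}
        hooks = {h: scripts[h] for h in LIFECYCLE_HOOKS if h in scripts}
        if hooks:
            hits[data.get("name", str(manifest))] = hooks
    return hits
```

Note that npm itself can suppress these hooks at install time (`npm install --ignore-scripts`), which is a stronger control; the scan above is for auditing trees that were installed without that flag.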
Related Stories

AI-Driven Threats and Tools in Offensive Security and Malware Evasion
Threat actors are increasingly leveraging artificial intelligence, particularly large language models (LLMs), to automate and enhance cyberattacks. Recent research demonstrates that LLMs such as GPT-4o and Claude can be manipulated to generate working exploits for enterprise software like Odoo ERP, significantly lowering the barrier for less-skilled attackers to launch sophisticated attacks. Concurrently, the underground market is witnessing the emergence of AI-powered malware tools, such as metamorphic crypters, which use AI to dynamically rewrite malicious code and evade detection by endpoint security solutions like Windows Defender. These developments highlight a rapidly evolving threat landscape where AI is both a tool for attackers and a challenge for defenders.
In response to these threats, the cybersecurity community is developing advanced AI-powered penetration testing frameworks like NeuroSploitv2. This tool integrates multiple LLMs and employs specialized agent roles, grounding techniques, and safety guardrails to automate vulnerability discovery and exploitation in a controlled, ethical manner. Meanwhile, defenders are also exploring granular attribute-based access control and post-quantum encryption to mitigate risks from context window injections in AI systems. The convergence of AI in both offensive and defensive security operations underscores the urgent need for robust safeguards and adaptive security strategies to address the dual-use nature of these technologies.
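The attribute-based access control idea mentioned above can be illustrated with a toy gate in front of a model's context window: only documents whose attributes satisfy the caller's attributes are admitted for retrieval. This is a minimal sketch; the attribute names, sensitivity levels, and policy rule are assumptions for this example, not NeuroSploitv2's actual design.

```python
# Illustrative ABAC sketch: admit a document into the LLM context window only
# if the user's clearance covers its sensitivity and the department matches
# (or the document is public). Limiting what reaches the context window
# shrinks the blast radius of context-window injection.

LEVELS = {"public": 0, "internal": 1, "restricted": 2}

def abac_filter(user, documents):
    """Return the subset of documents this user's attributes permit."""
    admitted = []
    for doc in documents:
        level_ok = LEVELS[user["clearance"]] >= LEVELS[doc["sensitivity"]]
        dept_ok = doc["dept"] in ("any", user["dept"])
        if level_ok and dept_ok:
            admitted.append(doc)
    return admitted

user = {"clearance": "internal", "dept": "finance"}
docs = [
    {"id": "d1", "sensitivity": "public",     "dept": "any"},
    {"id": "d2", "sensitivity": "restricted", "dept": "finance"},  # blocked: clearance
    {"id": "d3", "sensitivity": "internal",   "dept": "hr"},       # blocked: department
]
print([d["id"] for d in abac_filter(user, docs)])
```

A real deployment would evaluate policies in the retrieval layer against a central attribute store, but the gating pattern is the same.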
2 months ago
Malware Leveraging Large Language Models for Dynamic Capabilities
Security researchers have identified a new trend in which threat actors are embedding large language models (LLMs) directly into malware to enhance its capabilities and evade detection. Akamai Hunt discovered a novel malware strain that disguises its command and control (C2) traffic as legitimate LLM API requests, using Base64-encoded strings to communicate and potentially allowing attackers full control over compromised systems and data exfiltration. This approach enables malicious traffic to blend in with normal AI-related network activity, making detection more challenging for defenders. Further analysis and industry reporting highlight that malware families such as PromptFlux and PromptSteal are now querying LLMs mid-execution to dynamically alter their behavior, obfuscate code, and generate system commands on demand. PromptFlux, for example, uses the Gemini API to regularly re-obfuscate its source code, while PromptSteal leverages the Hugging Face API for real-time reconnaissance and exfiltration commands. These developments underscore the need for organizations to adapt their security controls and detection strategies to address the evolving threat landscape where AI and LLMs are weaponized by attackers.
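One detection angle follows directly from the tradecraft described above: genuine LLM API prompts are free-form natural language, whereas the disguised C2 traffic carries long Base64-encoded strings. The toy heuristic below flags field values that parse cleanly as Base64; the field choice, length threshold, and regex are assumptions for illustration, and a production detector would combine this with endpoint allowlisting and entropy analysis.

```python
# Heuristic sketch: flag request fields in LLM-API-looking traffic whose
# value is a long, valid Base64 string -- unlike the free-form text a
# genuine prompt would contain. Threshold and pattern are assumptions.
import base64
import binascii
import re

B64_RE = re.compile(r"[A-Za-z0-9+/]{16,}={0,2}")

def looks_like_b64_payload(value: str) -> bool:
    """True if the whole value is one long, decodable Base64 token."""
    value = value.strip()
    if not B64_RE.fullmatch(value):
        return False
    try:
        base64.b64decode(value, validate=True)  # also enforces length % 4 == 0
        return True
    except (binascii.Error, ValueError):
        return False

print(looks_like_b64_payload("Summarize this quarterly report for me"))
print(looks_like_b64_payload(base64.b64encode(b"whoami && ipconfig /all").decode()))
```

This heuristic will false-positive on long unbroken alphanumeric tokens (hashes, IDs), so in practice it serves as one signal among several rather than a verdict on its own.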
3 months ago
Surge in AI-Driven Cybercrime and Fraud Tactics
Cybercriminals are increasingly leveraging generative AI and large language models (LLMs) to enhance the sophistication, scale, and impact of their attacks. Reports highlight a dramatic rise in advanced phishing, digital fraud, and malware development, with AI enabling attackers to automate social engineering, generate convincing fake identities, and bypass traditional security controls. The use of AI has led to a significant increase in phishing email volume and a 180% surge in advanced fraud attacks, as criminals deploy autonomous bots and deepfake technologies to evade detection and inflict greater damage. Security researchers have observed malware authors integrating LLMs directly into their tools, allowing malicious code to rewrite itself or generate new commands at runtime, further complicating detection efforts. These developments mark a shift from low-effort, opportunistic attacks to highly engineered campaigns that require more resources to execute but yield far greater impact. The rapid adoption of AI by threat actors underscores the urgent need for organizations to reassess their defenses and adapt to the evolving threat landscape.
3 months ago