Malware Leveraging Large Language Models for Dynamic Capabilities
Security researchers have identified a new trend in which threat actors embed large language models (LLMs) directly into malware to extend its capabilities and evade detection. Akamai Hunt discovered a novel malware strain that disguises its command-and-control (C2) traffic as legitimate LLM API requests, communicating through Base64-encoded strings and potentially giving attackers full control over compromised systems along with the ability to exfiltrate data. Because this traffic blends in with normal AI-related network activity, it is significantly harder for defenders to detect.
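As a defensive illustration, one way to spot C2 payloads smuggled inside ostensibly legitimate LLM API calls is to decode any long Base64 runs in request bodies and flag those whose contents look encrypted or packed rather than like natural-language prompts. The sketch below is an assumption about how such a detector might work, not a description of Akamai's tooling; the regex and the entropy threshold are illustrative.

```python
import base64
import math
import re

# Heuristic assumption: disguised C2 traffic carries long Base64 blobs whose
# decoded bytes are high-entropy (encrypted/compressed), while genuine LLM
# prompts are mostly natural-language text.
B64_RE = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; random or encrypted data approaches 8.0."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious_payloads(body: str, threshold: float = 6.0) -> list[str]:
    """Return Base64 candidates in a request body whose decoded bytes
    look packed or encrypted (entropy at or above the threshold)."""
    flagged = []
    for match in B64_RE.findall(body):
        try:
            decoded = base64.b64decode(match, validate=True)
        except Exception:
            continue  # not actually valid Base64; ignore
        if shannon_entropy(decoded) >= threshold:
            flagged.append(match)
    return flagged
```

In practice a detector like this would run on proxy or EDR telemetry scoped to known LLM API endpoints, and would be tuned against legitimate traffic (some real API fields, such as base64-encoded images, are also high-entropy).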
Further analysis and industry reporting highlight that malware families such as PromptFlux and PromptSteal are now querying LLMs mid-execution to dynamically alter their behavior, obfuscate code, and generate system commands on demand. PromptFlux, for example, uses the Gemini API to regularly re-obfuscate its source code, while PromptSteal leverages the Hugging Face API for real-time reconnaissance and exfiltration commands. These developments underscore the need for organizations to adapt their security controls and detection strategies to address the evolving threat landscape where AI and LLMs are weaponized by attackers.
Related Stories
Emergence of LLM-Enabled Malware and Defensive Innovations
Security researchers have identified a new wave of threats where adversaries embed large language model (LLM) capabilities directly into malware, enabling malicious code to be generated at runtime and evading traditional detection methods. SentinelLABS highlighted real-world cases such as PromptLock ransomware and APT28’s LameHug/PROMPTSTEAL campaigns, noting that while these threats are adaptive, they often hardcode artifacts like API keys and prompts, which can be leveraged for detection. Novel hunting strategies, including YARA rules for API key structures and prompt detection, have uncovered thousands of LLM-enabled malware samples, including previously unknown threats like MalTerminal. In parallel, security vendors are leveraging LLMs defensively, as seen in NodeZero’s Advanced Data Pilfering (ADP) feature, which uses LLMs to identify hidden credentials and assess the business risk of compromised data. By applying semantic analysis to unstructured data, defenders can better understand what attackers might target and how to prioritize response. These developments underscore both the offensive and defensive potential of LLMs in cybersecurity, with attackers and defenders racing to exploit the technology’s unique capabilities.
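SentinelLABS' observation that LLM-enabled malware tends to hardcode API keys suggests a simple hunting approach: scan file contents for the publicly documented shapes of provider keys. The sketch below is in Python rather than YARA for brevity, and the patterns are illustrative approximations (real key formats vary and change over time).

```python
import re

# Illustrative key-shape patterns only; these prefixes and lengths are
# assumptions based on publicly documented formats, not exhaustive rules.
KEY_PATTERNS = {
    "openai": re.compile(rb"sk-[A-Za-z0-9]{20,}"),
    "huggingface": re.compile(rb"hf_[A-Za-z0-9]{30,}"),
    "google": re.compile(rb"AIza[0-9A-Za-z\-_]{35}"),
}

def scan_bytes(blob: bytes) -> dict[str, list[bytes]]:
    """Report which provider key shapes appear in a sample's raw bytes.

    A hit in an unexpected binary (e.g. a script masquerading as a game
    or utility) is a hunting lead, not proof of malice on its own.
    """
    hits = {}
    for provider, pattern in KEY_PATTERNS.items():
        found = pattern.findall(blob)
        if found:
            hits[provider] = found
    return hits
```

The same idea translates directly into YARA `strings`/`condition` rules for retrohunting across sample corpora, which is how the reporting describes previously unknown samples like MalTerminal being surfaced.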
4 months ago
Malware Leveraging AI for Adaptive Code Generation and Evasion
Malware developers are actively experimenting with artificial intelligence, specifically large language models (LLMs), to create adaptive malware capable of rewriting its own code during execution. Google Threat Intelligence Group has identified malware families such as PromptFlux and PromptSteal that utilize LLMs to dynamically generate, modify, and execute scripts, allowing these threats to evade traditional detection methods. PromptFlux uses Gemini's API to regularly mutate its VBScript payloads, issuing prompts like "Act as an expert VBScript obfuscator" to the model, resulting in self-modifying malware that continually alters its digital fingerprints. PromptSteal, meanwhile, masquerades as an image generator but leverages a hosted LLM to generate and execute one-line Windows commands for data theft and exfiltration, effectively functioning as a live command engine. These AI-driven malware samples are still considered experimental, with limited reliability and persistence compared to traditional threats, but they represent a significant evolution in attack techniques. Notably, PromptSteal was reportedly used by Russia-linked APT28 (also known as BlueDelta, Fancy Bear, and FROZENLAKE) against Ukrainian targets, marking the first observed use of LLMs in live malware operations. The emergence of purpose-built AI tools for cybercrime is lowering the barrier for less sophisticated actors, and researchers warn that the integration of AI into malware development could soon lead to more autonomous, adaptive, and harder-to-detect threats. Google has taken steps to disrupt these operations, but the trend signals a shift toward more unpredictable and rapidly evolving attack patterns.
4 months ago
Malicious Use of AI and LLMs for Evasion and C2 in Cyberattacks
Cybercriminals are increasingly leveraging large language models (LLMs) and AI-driven techniques to enhance their attack capabilities and evade detection. Recent research highlights the operationalization of LLM-in-the-loop tradecraft, where malware dynamically generates host-specific PowerShell commands for reconnaissance and data collection, frequently rewriting itself to bypass static and machine learning-based security detections. Attackers are also exploiting stolen API keys and enterprise AI connectors to establish covert command-and-control (C2) channels, disguising malicious activity as legitimate AI traffic. These tactics are being used to target critical infrastructure, with a focus on IT systems that can impact operational technology environments through identity abuse, weak segmentation, and ransomware attacks. In parallel, threat actors are attempting to manipulate AI-based security tools directly. A malicious npm package, `eslint-plugin-unicorn-ts-2`, was discovered embedding a prompt intended to influence the decision-making of AI-driven scanners, while also exfiltrating sensitive environment variables via a post-install script. This approach signals a new trend where attackers not only evade traditional detection but also actively seek to undermine the effectiveness of AI-powered defenses. The emergence of underground markets for malicious LLMs further underscores the growing sophistication and commercialization of AI-enabled cybercrime.
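The `eslint-plugin-unicorn-ts-2` case combines two red flags: an install-time lifecycle script and access to environment variables. A minimal triage heuristic (an assumption for illustration, not the scanner used in the reporting) might flag npm packages that exhibit both:

```python
import json
import re

# npm lifecycle hooks that execute code automatically at install time.
LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall")
ENV_ACCESS = re.compile(r"process\.env")

def triage_package(manifest: str, script_source: str) -> list[str]:
    """Flag an npm package whose install-time scripts read process.env,
    a pattern consistent with environment-variable exfiltration.

    `manifest` is the package.json text; `script_source` is the source of
    the script(s) those hooks invoke. Findings are leads for review, not
    verdicts -- many legitimate packages use lifecycle hooks.
    """
    findings = []
    scripts = json.loads(manifest).get("scripts", {})
    hooked = [h for h in LIFECYCLE_HOOKS if h in scripts]
    if hooked:
        findings.append("lifecycle hooks: " + ", ".join(hooked))
        if ENV_ACCESS.search(script_source):
            findings.append("install-time script reads process.env")
    return findings
```

A production scanner would go further (network calls in install scripts, obfuscated strings, embedded prompt text aimed at AI reviewers), but even this two-signal check narrows review to a small fraction of packages.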
3 months ago