Emergence of LLM-Enabled Malware and Defensive Innovations
Security researchers have identified a new wave of threats in which adversaries embed Large Language Model (LLM) capabilities directly into malware, allowing malicious code to be generated at runtime and to evade traditional detection methods. SentinelLABS highlighted real-world cases such as the PromptLock ransomware and APT28’s LameHug/PROMPTSTEAL campaigns, noting that while these threats are adaptive, they often hardcode artifacts such as API keys and prompts, which defenders can leverage for detection. Novel hunting strategies, including YARA rules that match API key structures and embedded prompts, have uncovered thousands of LLM-enabled malware samples, including previously unknown threats like MalTerminal.
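The hunting approach described above relies on the fact that LLM API keys follow well-known prefix formats, so a simple pattern scan over a sample's strings can flag embedded keys. A minimal sketch in Python, where the key patterns are illustrative assumptions rather than the actual SentinelLABS rules:

```python
import re

# Illustrative patterns for common LLM API key formats; the exact
# structures here are assumptions for demonstration, not production rules.
API_KEY_PATTERNS = {
    "openai": re.compile(rb"sk-[A-Za-z0-9_-]{20,}"),
    "huggingface": re.compile(rb"hf_[A-Za-z0-9]{30,}"),
}

def scan_for_llm_keys(data: bytes) -> list[tuple[str, bytes]]:
    """Return (provider, match) pairs for key-like strings in a binary blob."""
    hits = []
    for provider, pattern in API_KEY_PATTERNS.items():
        hits.extend((provider, m) for m in pattern.findall(data))
    return hits

# Example: a sample that hardcodes a fake OpenAI-style key among other strings.
sample = b"\x00\x01config\x00sk-AbCdEfGhIjKlMnOpQrStUvWx\x00run\x00"
print(scan_for_llm_keys(sample))
```

Production YARA rules would encode the same patterns as string definitions, but the principle is identical: the adaptive payload may change on every run, while the hardcoded key that fetches it does not.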
In parallel, security vendors are leveraging LLMs defensively, as seen in NodeZero’s Advanced Data Pilfering (ADP) feature, which uses LLMs to identify hidden credentials and assess the business risk of compromised data. By applying semantic analysis to unstructured data, defenders can better understand what attackers might target and how to prioritize response. These developments underscore both the offensive and defensive potential of LLMs in cybersecurity, with attackers and defenders racing to exploit the technology’s unique capabilities.
Related Stories
Malware Leveraging Large Language Models for Dynamic Capabilities
Security researchers have identified a new trend in which threat actors are embedding large language models (LLMs) directly into malware to enhance its capabilities and evade detection. Akamai Hunt discovered a novel malware strain that disguises its command and control (C2) traffic as legitimate LLM API requests, using Base64-encoded strings to communicate and potentially allowing attackers full control over compromised systems and data exfiltration. This approach enables malicious traffic to blend in with normal AI-related network activity, making detection more challenging for defenders. Further analysis and industry reporting highlight that malware families such as PromptFlux and PromptSteal are now querying LLMs mid-execution to dynamically alter their behavior, obfuscate code, and generate system commands on demand. PromptFlux, for example, uses the Gemini API to regularly re-obfuscate its source code, while PromptSteal leverages the Hugging Face API for real-time reconnaissance and exfiltration commands. These developments underscore the need for organizations to adapt their security controls and detection strategies to address the evolving threat landscape where AI and LLMs are weaponized by attackers.
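The disguise technique reported by Akamai Hunt can be sketched in a few lines: a C2 instruction is Base64-encoded and embedded in a request body shaped like an ordinary LLM chat API call, so shallow inspection that only checks the endpoint and JSON structure lets it through. The field names below are illustrative, not any specific vendor's schema:

```python
import base64
import json

def wrap_c2_command(command: str) -> str:
    """Encode a C2 instruction and hide it inside a chat-style request body."""
    payload = base64.b64encode(command.encode()).decode()
    body = {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": payload}],
    }
    return json.dumps(body)

def unwrap_c2_command(body: str) -> str:
    """What the implant does on receipt: pull the 'prompt' out and decode it."""
    content = json.loads(body)["messages"][0]["content"]
    return base64.b64decode(content).decode()

wire = wrap_c2_command("whoami")
print(wire)                     # looks like an ordinary chat-completion request
print(unwrap_c2_command(wire))  # → whoami
```

Note that the plaintext command never appears on the wire, which is why defenders are advised to decode and inspect opaque string fields in AI-bound traffic rather than trusting the request shape.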
3 months ago

Malware Leveraging AI for Adaptive Code Generation and Evasion
Malware developers are actively experimenting with artificial intelligence, specifically large language models (LLMs), to create adaptive malware capable of rewriting its own code during execution. Google Threat Intelligence Group has identified malware families such as PromptFlux and PromptSteal that utilize LLMs to dynamically generate, modify, and execute scripts, allowing these threats to evade traditional detection methods. PromptFlux uses Gemini's API to regularly mutate its VBScript payloads, issuing prompts like "Act as an expert VBScript obfuscator" to the model, resulting in self-modifying malware that continually alters its digital fingerprints. PromptSteal, meanwhile, masquerades as an image generator but leverages a hosted LLM to generate and execute one-line Windows commands for data theft and exfiltration, effectively functioning as a live command engine. These AI-driven malware samples are still considered experimental, with limited reliability and persistence compared to traditional threats, but they represent a significant evolution in attack techniques. Notably, PromptSteal was reportedly used by Russia-linked APT28 (also known as BlueDelta, Fancy Bear, and FROZENLAKE) against Ukrainian targets, marking the first observed use of LLMs in live malware operations. The emergence of purpose-built AI tools for cybercrime is lowering the barrier for less sophisticated actors, and researchers warn that the integration of AI into malware development could soon lead to more autonomous, adaptive, and harder-to-detect threats. Google has taken steps to disrupt these operations, but the trend signals a shift toward more unpredictable and rapidly evolving attack patterns.
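The PromptFlux-style mutation loop described above can be shown schematically. The LLM call is replaced here by a stub (no API is contacted, and no real obfuscation is performed); the point is the control flow, and the fact that the hardcoded prompt string survives every generation, which is exactly what defenders can hunt for:

```python
import hashlib

# Hardcoded prompt of the kind reported in PromptFlux samples; because it
# must persist across mutations, it is a stable detection artifact.
OBFUSCATION_PROMPT = "Act as an expert VBScript obfuscator"

def stub_llm_rewrite(prompt: str, source: str, generation: int) -> str:
    """Stand-in for the model call: a real LLM would return transformed code;
    this stub just tags the source so each generation differs."""
    return f"' {prompt} (gen {generation})\n{source}"

def mutate(source: str, generations: int) -> list[str]:
    """Run the rewrite loop and return the SHA-256 of each generation,
    showing that every cycle produces a new on-disk fingerprint."""
    fingerprints = []
    for g in range(generations):
        source = stub_llm_rewrite(OBFUSCATION_PROMPT, source, g)
        fingerprints.append(hashlib.sha256(source.encode()).hexdigest())
    return fingerprints

prints = mutate('WScript.Echo "payload"', 3)
print(len(set(prints)))  # 3 distinct hashes: the fingerprint changes each cycle
```

Hash-based blocklists fail against this loop by design, which is why the hunting strategies above pivot to the invariants: the prompt text and the API credentials.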
4 months ago

Malicious LLMs Enable Low-Skilled Attackers with Advanced Cybercrime Tools
Unrestricted large language models (LLMs) such as WormGPT 4 and KawaiiGPT are being leveraged by cybercriminals to generate sophisticated malicious code, including ransomware scripts and phishing messages. Researchers from Palo Alto Networks Unit 42 demonstrated that WormGPT 4, a paid, uncensored ChatGPT variant, can produce functional PowerShell scripts for encrypting files with AES-256, automate data exfiltration via Tor, and craft convincing ransom notes, effectively lowering the barrier for inexperienced hackers to conduct advanced attacks. KawaiiGPT, a free community-driven alternative, was also found to generate well-crafted phishing content and automate lateral movement, further democratizing access to cybercrime capabilities. The proliferation of these malicious LLMs is accelerating the adoption of advanced attack techniques among less skilled threat actors, enabling them to perform operations that previously required significant expertise. The tools are available through paid subscriptions or free local instances, making them accessible to a wider range of cybercriminals. Security researchers warn that the credible linguistic manipulation and automation provided by these LLMs could lead to an increase in the volume and sophistication of cyberattacks, including business email compromise (BEC), phishing, and ransomware campaigns.
3 months ago