Malware Leveraging AI for Adaptive Code Generation and Evasion
Malware developers are actively experimenting with artificial intelligence, specifically large language models (LLMs), to create adaptive malware capable of rewriting its own code during execution. Google Threat Intelligence Group has identified malware families such as PromptFlux and PromptSteal that utilize LLMs to dynamically generate, modify, and execute scripts, allowing these threats to evade traditional detection methods. PromptFlux uses Gemini's API to regularly mutate its VBScript payloads, issuing prompts like "Act as an expert VBScript obfuscator" to the model, resulting in self-modifying malware that continually alters its digital fingerprints. PromptSteal, meanwhile, masquerades as an image generator but leverages a hosted LLM to generate and execute one-line Windows commands for data theft and exfiltration, effectively functioning as a live command engine.
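The detection problem this creates can be shown in a few lines: once a payload is rewritten on every run, any control keyed to a file hash stops matching. A minimal Python sketch of why hash-based signatures fail against such mutation (the two script strings are invented stand-ins, not real payloads):

```python
import hashlib

# Two functionally identical script variants, as an LLM-driven
# obfuscator might emit them on successive runs (illustrative only).
variant_a = 'WScript.Echo "hello"'
variant_b = 'Dim m : m = "hello" : WScript.Echo m'

digest_a = hashlib.sha256(variant_a.encode()).hexdigest()
digest_b = hashlib.sha256(variant_b.encode()).hexdigest()

# The digests differ even though behavior is unchanged, so a
# signature keyed to the file hash misses the rewritten variant.
print(digest_a == digest_b)  # → False
```

This is why the article stresses behavior-based detection: every mutation produces a fresh fingerprint, but the underlying actions (API calls, persistence, exfiltration) stay constant.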
These AI-driven malware samples are still considered experimental, with limited reliability and persistence compared to traditional threats, but they represent a significant evolution in attack techniques. Notably, PromptSteal was reportedly used by Russia-linked APT28 (also known as BlueDelta, Fancy Bear, and FROZENLAKE) against Ukrainian targets, marking the first observed use of LLMs in live malware operations. The emergence of purpose-built AI tools for cybercrime is lowering the barrier for less sophisticated actors, and researchers warn that the integration of AI into malware development could soon lead to more autonomous, adaptive, and harder-to-detect threats. Google has taken steps to disrupt these operations, but the trend signals a shift toward more unpredictable and rapidly evolving attack patterns.
Related Stories
Malware Leveraging Large Language Models for Dynamic Capabilities
Security researchers have identified a new trend in which threat actors are embedding large language models (LLMs) directly into malware to enhance its capabilities and evade detection. Akamai Hunt discovered a novel malware strain that disguises its command-and-control (C2) traffic as legitimate LLM API requests, using Base64-encoded strings to communicate and potentially allowing attackers full control over compromised systems and data exfiltration. This approach enables malicious traffic to blend in with normal AI-related network activity, making detection more challenging for defenders. Further analysis and industry reporting highlight that malware families such as PromptFlux and PromptSteal are now querying LLMs mid-execution to dynamically alter their behavior, obfuscate code, and generate system commands on demand. PromptFlux, for example, uses the Gemini API to regularly re-obfuscate its source code, while PromptSteal leverages the Hugging Face API for real-time reconnaissance and exfiltration commands. These developments underscore the need for organizations to adapt their security controls and detection strategies to address the evolving threat landscape where AI and LLMs are weaponized by attackers.
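A defender-side heuristic for the C2 pattern described above might flag unusually long Base64 runs in request bodies bound for known LLM API endpoints. A minimal sketch, where the host list, length threshold, and function names are illustrative assumptions rather than vendor guidance:

```python
import base64
import re

# Hypothetical heuristic: long, decodable Base64 runs inside bodies
# sent to LLM API hosts, since the reported strain tunnels C2 data
# this way. Threshold and host set are assumptions for illustration.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")
LLM_HOSTS = {"api.openai.com", "generativelanguage.googleapis.com"}

def suspicious_b64_blobs(host: str, body: str) -> list[str]:
    """Return decodable Base64 runs of unusual length in the body."""
    if host not in LLM_HOSTS:
        return []
    hits = []
    for run in B64_RUN.findall(body):
        try:
            base64.b64decode(run, validate=True)
        except Exception:
            continue  # not valid Base64, likely benign text
        hits.append(run)
    return hits

payload = base64.b64encode(b"exfil: C:\\Users\\victim\\secrets.txt").decode()
print(bool(suspicious_b64_blobs("api.openai.com", f'{{"prompt": "{payload}"}}')))  # → True
```

In practice such a check would run on a TLS-inspecting proxy and would need allow-listing, since legitimate prompts can also carry Base64 (e.g., inline images), which is exactly why this traffic blends in so well.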
3 months ago
Adversaries Leverage Gemini AI for Self-Modifying Malware and Data Processing Agents
Google's Threat Intelligence Group (GTIG) has identified a significant evolution in cybercriminal and nation-state tactics, with adversaries now leveraging Gemini AI to develop advanced malware and data processing agents. Notably, groups such as APT42 have experimented with Gemini to create a 'Thinking Robot' malware module capable of rewriting its own code during execution to evade detection, as well as AI agents that process and analyze sensitive personal data for surveillance and intelligence gathering. These developments mark a shift from previous uses of AI for productivity, such as phishing and translation, to direct integration of AI into malware operations. The experimental PromptFlux malware dropper exemplifies this trend, utilizing Gemini to dynamically generate obfuscated VBScript variants and periodically update its code to bypass antivirus defenses. PromptFlux attempts persistence via Startup folder entries and spreads through removable drives and network shares, while its 'Thinking Robot' module queries Gemini for new evasion techniques. Although PromptFlux is still in early development and not yet capable of causing significant harm, Google has proactively disabled its access to the Gemini API. Other AI-powered malware, such as FruitShell, has also been observed, indicating a broader move toward AI-driven, self-modifying threats in the wild.
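Because PromptFlux's reported persistence mechanism is a plain Startup folder entry, a simple triage step is to enumerate script-like files there. A sketch, assuming a per-user Startup path and an illustrative extension list (the function name is invented for this example):

```python
import os

# Defender-side triage sketch: list script files in the per-user
# Startup folder, the persistence location PromptFlux reportedly
# uses. The extension set is an assumption for illustration.
STARTUP = os.path.expandvars(
    r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup"
)
SCRIPT_EXTS = {".vbs", ".vbe", ".js", ".wsf", ".ps1", ".bat"}

def startup_scripts(folder: str = STARTUP) -> list[str]:
    """Return script-like entries in the given Startup folder, if any."""
    if not os.path.isdir(folder):
        return []
    return [
        name for name in os.listdir(folder)
        if os.path.splitext(name)[1].lower() in SCRIPT_EXTS
    ]
```

A hit here is only a lead, not a verdict: legitimate software also drops Startup entries, so anything flagged still needs manual inspection.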
4 months ago
Malicious Use of AI and LLMs for Evasion and C2 in Cyberattacks
Cybercriminals are increasingly leveraging large language models (LLMs) and AI-driven techniques to enhance their attack capabilities and evade detection. Recent research highlights the operationalization of LLM-in-the-loop tradecraft, where malware dynamically generates host-specific PowerShell commands for reconnaissance and data collection, frequently rewriting itself to bypass static and machine learning-based security detections. Attackers are also exploiting stolen API keys and enterprise AI connectors to establish covert command-and-control (C2) channels, disguising malicious activity as legitimate AI traffic. These tactics are being used to target critical infrastructure, with a focus on IT systems that can impact operational technology environments through identity abuse, weak segmentation, and ransomware attacks. In parallel, threat actors are attempting to manipulate AI-based security tools directly. A malicious npm package, `eslint-plugin-unicorn-ts-2`, was discovered embedding a prompt intended to influence the decision-making of AI-driven scanners, while also exfiltrating sensitive environment variables via a post-install script. This approach signals a new trend where attackers not only evade traditional detection but also actively seek to undermine the effectiveness of AI-powered defenses. The emergence of underground markets for malicious LLMs further underscores the growing sophistication and commercialization of AI-enabled cybercrime.
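The `eslint-plugin-unicorn-ts-2` incident turned on an npm install-time lifecycle script, which is also the easiest place to check a package before installing it. A minimal review helper, assuming the manifest is available as a JSON string (the function name is invented for this sketch; a real review would also read the referenced script itself):

```python
import json

# Illustrative supply-chain check: flag packages whose manifests
# declare install-time lifecycle hooks, the mechanism the malicious
# npm package reportedly used to run its exfiltration script.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def install_hooks(package_json: str) -> dict[str, str]:
    """Return any install-time lifecycle scripts declared in package.json."""
    scripts = json.loads(package_json).get("scripts", {})
    return {k: v for k, v in scripts.items() if k in INSTALL_HOOKS}

manifest = '{"name": "demo", "scripts": {"postinstall": "node steal.js", "test": "jest"}}'
print(install_hooks(manifest))  # → {'postinstall': 'node steal.js'}
```

Install hooks are legitimate for packages that compile native code, so the check surfaces candidates for review rather than proving malice; `npm install --ignore-scripts` is the blunt mitigation when in doubt.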
3 months ago