Adversaries Leverage Gemini AI for Self-Modifying Malware and Data Processing Agents
Google's Threat Intelligence Group (GTIG) has identified a significant evolution in cybercriminal and nation-state tactics, with adversaries now leveraging Gemini AI to develop advanced malware and data processing agents. Notably, groups such as APT42 have experimented with Gemini to create a 'Thinking Robot' malware module capable of rewriting its own code during execution to evade detection, as well as AI agents that process and analyze sensitive personal data for surveillance and intelligence gathering. These developments mark a shift from previous uses of AI for productivity, such as phishing and translation, to direct integration of AI into malware operations.
The experimental PromptFlux malware dropper exemplifies this trend, utilizing Gemini to dynamically generate obfuscated VBScript variants and periodically update its code to bypass antivirus defenses. PromptFlux attempts persistence via Startup folder entries and spreads through removable drives and network shares, while its 'Thinking Robot' module queries Gemini for new evasion techniques. Although PromptFlux is still in early development and not yet capable of causing significant harm, Google has proactively disabled its access to the Gemini API. Other AI-powered malware, such as FruitShell, have also been observed, indicating a broader move toward AI-driven, self-modifying threats in the wild.
Sources
3 more from sources like bleeping computer, the hacker news and help net security
Related Stories
Malware Leveraging AI for Adaptive Code Generation and Evasion
Malware developers are actively experimenting with artificial intelligence, specifically large language models (LLMs), to create adaptive malware capable of rewriting its own code during execution. Google Threat Intelligence Group has identified malware families such as PromptFlux and PromptSteal that utilize LLMs to dynamically generate, modify, and execute scripts, allowing these threats to evade traditional detection methods. PromptFlux uses Gemini's API to regularly mutate its VBScript payloads, issuing prompts like "Act as an expert VBScript obfuscator" to the model, resulting in self-modifying malware that continually alters its digital fingerprints. PromptSteal, meanwhile, masquerades as an image generator but leverages a hosted LLM to generate and execute one-line Windows commands for data theft and exfiltration, effectively functioning as a live command engine. These AI-driven malware samples are still considered experimental, with limited reliability and persistence compared to traditional threats, but they represent a significant evolution in attack techniques. Notably, PromptSteal was reportedly used by Russia-linked APT28 (also known as BlueDelta, Fancy Bear, and FROZENLAKE) against Ukrainian targets, marking the first observed use of LLMs in live malware operations. The emergence of purpose-built AI tools for cybercrime is lowering the barrier for less sophisticated actors, and researchers warn that the integration of AI into malware development could soon lead to more autonomous, adaptive, and harder-to-detect threats. Google has taken steps to disrupt these operations, but the trend signals a shift toward more unpredictable and rapidly evolving attack patterns.
4 months ago
Google Reports Nation-State Hackers Using Gemini AI to Accelerate Reconnaissance and Attack Support
Google’s Threat Intelligence Group (GTIG) reported that multiple **state-backed threat actors** are abusing Google’s *Gemini* generative AI to speed up key phases of the attack lifecycle, particularly **target reconnaissance and profiling**. GTIG said it observed North Korea-linked **UNC2970** using Gemini to synthesize OSINT and build detailed profiles of high-value targets—researching major cybersecurity and defense companies, mapping technical job roles, and even gathering salary information—to support campaign planning and enable more tailored social engineering. GTIG also assessed that other government-aligned groups in **China, North Korea, and Iran** are using Gemini for tasks including coding/scripting, researching publicly known vulnerabilities, and supporting post-compromise activity. One example cited involved a Chinese actor using Gemini to compile information on specific individuals in Pakistan and to collect structural data on separatist organizations; Google said it disabled the assets used in that activity, while noting similar Pakistan-focused targeting persisted. GTIG characterized this AI-enabled workflow as blurring the line between routine research and malicious reconnaissance, allowing actors to move from initial research to active targeting **faster and at broader scale**.
1 months ago
AI-Enabled Cybercrime Services and Emerging Enterprise AI Risks
Group-IB reported that **AI is increasingly being operationalized as “crimeware-as-a-service,”** with weaponized language models and deepfake tooling sold as low-cost, off-the-shelf infrastructure via channels like Telegram. The report cited a sharp rise in dark-web discussion of AI (up **371%** since 2019) and described a growing market for **“Dark LLMs”** (self-hosted models designed for scams and malware, often positioned to run behind Tor and ignore safety controls) priced as low as **$30/month**, alongside commoditized deepfake/impersonation “synthetic identity” kits advertised for around **$5**; Group-IB also attributed **hundreds of millions of dollars in verified losses** to deepfake-enabled fraud in a single quarter. Separate reporting highlighted **enterprise-facing AI risk** from both platform incentives and technical weaknesses. Commentary on the ad-driven direction of consumer AI products warned that monetization and behavioral targeting could increase manipulation and abuse potential, while CSO Online reported a **Google Gemini prompt-injection weakness** that can expose organizations to new classes of data leakage and workflow manipulation when LLMs are connected to enterprise content and actions. A CSO Online “secure browser” comparison piece was largely general guidance and not directly tied to the AI-cybercrime services or the Gemini prompt-injection issue.
1 months ago