Skip to main content
Mallory
Mallory

Google Reports Nation-State Hackers Using Gemini AI to Accelerate Reconnaissance and Attack Support

google threat intelligence groupvulnerability researchcybersecurity companiesgenerative aigeminireconnaissancesocial engineeringdefense companiestarget profilingpost-compromisescriptingattack lifecyclenorth korea
Updated February 13, 2026 at 11:00 PM6 sources
Google Reports Nation-State Hackers Using Gemini AI to Accelerate Reconnaissance and Attack Support

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Google’s Threat Intelligence Group (GTIG) reported that multiple state-backed threat actors are abusing Google’s Gemini generative AI to speed up key phases of the attack lifecycle, particularly target reconnaissance and profiling. GTIG said it observed North Korea-linked UNC2970 using Gemini to synthesize OSINT and build detailed profiles of high-value targets—researching major cybersecurity and defense companies, mapping technical job roles, and even gathering salary information—to support campaign planning and enable more tailored social engineering.

GTIG also assessed that other government-aligned groups in China, North Korea, and Iran are using Gemini for tasks including coding/scripting, researching publicly known vulnerabilities, and supporting post-compromise activity. One example cited involved a Chinese actor using Gemini to compile information on specific individuals in Pakistan and to collect structural data on separatist organizations; Google said it disabled the assets used in that activity, while noting similar Pakistan-focused targeting persisted. GTIG characterized this AI-enabled workflow as blurring the line between routine research and malicious reconnaissance, allowing actors to move from initial research to active targeting faster and at broader scale.

Related Entities

Malware

Related Stories

Adversaries Leverage Gemini AI for Self-Modifying Malware and Data Processing Agents

Google's Threat Intelligence Group (GTIG) has identified a significant evolution in cybercriminal and nation-state tactics, with adversaries now leveraging Gemini AI to develop advanced malware and data processing agents. Notably, groups such as APT42 have experimented with Gemini to create a 'Thinking Robot' malware module capable of rewriting its own code during execution to evade detection, as well as AI agents that process and analyze sensitive personal data for surveillance and intelligence gathering. These developments mark a shift from previous uses of AI for productivity, such as phishing and translation, to direct integration of AI into malware operations. The experimental PromptFlux malware dropper exemplifies this trend, utilizing Gemini to dynamically generate obfuscated VBScript variants and periodically update its code to bypass antivirus defenses. PromptFlux attempts persistence via Startup folder entries and spreads through removable drives and network shares, while its 'Thinking Robot' module queries Gemini for new evasion techniques. Although PromptFlux is still in early development and not yet capable of causing significant harm, Google has proactively disabled its access to the Gemini API. Other AI-powered malware, such as FruitShell, have also been observed, indicating a broader move toward AI-driven, self-modifying threats in the wild.

4 months ago
Google GTIG Warns of Intensifying Nation-State Targeting of the Defense Industrial Base

Google GTIG Warns of Intensifying Nation-State Targeting of the Defense Industrial Base

Google’s Threat Intelligence Group (GTIG) reported sustained and expanding cyber operations against the **defense industrial base (DIB)** by state-linked and aligned actors from **China, Iran, North Korea, and Russia**, driven by battlefield technology demands and geopolitical conflict. Reported themes include targeting defense organizations supporting the Russia–Ukraine war, **social engineering and recruitment/hiring-process abuse** aimed at employees (notably attributed to North Korean and Iranian activity), increased reliance on **edge devices and appliances** for initial access by China-nexus groups, and heightened **supply-chain exposure** tied to compromises in adjacent manufacturing ecosystems. The reporting highlights specific tactics and actor activity, including Russia-linked **APT44 (Sandworm)** efforts to access data from **Telegram and Signal**, including use of a Windows batch script (`WAVESIGN`) to decrypt and exfiltrate data from Signal Desktop after likely obtaining physical access to devices in Ukraine. Additional activity described includes Ukraine-focused campaigns using defense-themed lures (e.g., drones and counter-drone systems) and broader nation-state use of **zero-day exploitation in edge devices** to establish footholds in defense contractors’ networks, reinforcing GTIG’s assessment that “pre-positioning” and continuous access-building are now baseline expectations for DIB organizations.

1 months ago
AI-Enabled Cybercrime Services and Emerging Enterprise AI Risks

AI-Enabled Cybercrime Services and Emerging Enterprise AI Risks

Group-IB reported that **AI is increasingly being operationalized as “crimeware-as-a-service,”** with weaponized language models and deepfake tooling sold as low-cost, off-the-shelf infrastructure via channels like Telegram. The report cited a sharp rise in dark-web discussion of AI (up **371%** since 2019) and described a growing market for **“Dark LLMs”** (self-hosted models designed for scams and malware, often positioned to run behind Tor and ignore safety controls) priced as low as **$30/month**, alongside commoditized deepfake/impersonation “synthetic identity” kits advertised for around **$5**; Group-IB also attributed **hundreds of millions of dollars in verified losses** to deepfake-enabled fraud in a single quarter. Separate reporting highlighted **enterprise-facing AI risk** from both platform incentives and technical weaknesses. Commentary on the ad-driven direction of consumer AI products warned that monetization and behavioral targeting could increase manipulation and abuse potential, while CSO Online reported a **Google Gemini prompt-injection weakness** that can expose organizations to new classes of data leakage and workflow manipulation when LLMs are connected to enterprise content and actions. A CSO Online “secure browser” comparison piece was largely general guidance and not directly tied to the AI-cybercrime services or the Gemini prompt-injection issue.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.