Generative AI Used to Produce Malicious JavaScript and Exploit Code
New research highlights how large language models (LLMs) can be operationalized for offensive use, including generating malicious JavaScript and exploit code with limited human involvement. Unit 42 described an AI-augmented runtime assembly technique in which a seemingly benign webpage makes client-side API calls to trusted LLM services to obtain code fragments that are then assembled and executed in the victim’s browser, producing a personalized phishing experience. The approach is designed to be evasive by delivering content from trusted LLM domains, producing polymorphic code per visit, and deferring malicious behavior until runtime—reducing the effectiveness of static and network-only detections.
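The runtime-assembly technique depends on the victim's browser being allowed to call LLM API endpoints from the page context and to execute the strings it receives. For a legitimate site concerned about injected content abusing this pattern, a restrictive Content-Security-Policy is one standard browser-side control; the header below is an illustrative sketch of that idea, not a configuration from the Unit 42 report:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self'
```

With `connect-src 'self'`, scripts on the page cannot fetch code fragments from third-party LLM domains, and omitting `'unsafe-eval'` from `script-src` blocks `eval()` of fetched strings. Note this only helps when the defender controls the hosting page; it does nothing against a phishing page served entirely from attacker infrastructure.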
Separately, an experiment reported by CybersecurityNews described testing GPT-5.2- and Opus 4.5-based systems against a zero-day in the QuickJS JavaScript interpreter, resulting in 40+ distinct exploits across multiple configurations and protection scenarios. The report claims GPT-5.2 solved all presented challenges and that many exploit-generation runs completed in under an hour at relatively modest token costs, suggesting exploit development could increasingly scale with compute and budget rather than scarce expert labor. Together, the reports reinforce that LLMs can be used both for client-side phishing payload generation and for automated vulnerability exploitation, increasing the speed and variability of attacks defenders may face.
Related Stories
Malicious LLMs Enable Low-Skilled Attackers with Advanced Cybercrime Tools
Unrestricted large language models (LLMs) such as WormGPT 4 and KawaiiGPT are being leveraged by cybercriminals to generate sophisticated malicious code, including ransomware scripts and phishing messages. Researchers from Palo Alto Networks Unit 42 demonstrated that WormGPT 4, a paid, uncensored ChatGPT variant, can produce functional PowerShell scripts for encrypting files with AES-256, automate data exfiltration via Tor, and craft convincing ransom notes, effectively lowering the barrier for inexperienced hackers to conduct advanced attacks. KawaiiGPT, a free community-driven alternative, was also found to generate well-crafted phishing content and automate lateral movement, further democratizing access to cybercrime capabilities. The proliferation of these malicious LLMs is accelerating the adoption of advanced attack techniques among less skilled threat actors, enabling them to perform operations that previously required significant expertise. The tools are available through paid subscriptions or free local instances, making them accessible to a wider range of cybercriminals. Security researchers warn that the credible linguistic manipulation and automation provided by these LLMs could lead to an increase in the volume and sophistication of cyberattacks, including business email compromise (BEC), phishing, and ransomware campaigns.
3 months ago
AI-Enabled Offensive Techniques Accelerate Web Phishing and Vulnerability Exploitation
Security researchers reported an emerging web attack technique that uses **generative AI** to turn a benign webpage into a malicious phishing or credential-stealing page *at runtime*. In a proof of concept attributed to Palo Alto Networks’ **Unit 42**, a “clean” page embeds instructions that trigger calls to public LLM APIs (e.g., Google Gemini, DeepSeek) to generate malicious JavaScript after the victim loads the site; the code is then executed in the browser, leaving little or no static payload to detect. Because the generated content is fetched from trusted AI service domains, the approach can also reduce the effectiveness of some network filtering and static analysis controls. Separately, an Anthropic evaluation highlighted that modern AI models are increasingly capable of conducting **multi-stage network attacks** using only standard, open-source tooling rather than specialized custom toolkits. The write-up notes that Claude Sonnet 4.5 could, in some simulated environments, identify a known public CVE and produce working exploit code quickly enough to exfiltrate sensitive data in an Equifax-like breach simulation, underscoring how AI can compress attacker timelines and increase the importance of fundamentals such as rapid patching and vulnerability management.
1 month ago
Malware Leveraging AI for Adaptive Code Generation and Evasion
Malware developers are actively experimenting with artificial intelligence, specifically large language models (LLMs), to create adaptive malware capable of rewriting its own code during execution. Google Threat Intelligence Group has identified malware families such as PromptFlux and PromptSteal that utilize LLMs to dynamically generate, modify, and execute scripts, allowing these threats to evade traditional detection methods. PromptFlux uses Gemini's API to regularly mutate its VBScript payloads, issuing prompts like "Act as an expert VBScript obfuscator" to the model, resulting in self-modifying malware that continually alters its digital fingerprints. PromptSteal, meanwhile, masquerades as an image generator but leverages a hosted LLM to generate and execute one-line Windows commands for data theft and exfiltration, effectively functioning as a live command engine. These AI-driven malware samples are still considered experimental, with limited reliability and persistence compared to traditional threats, but they represent a significant evolution in attack techniques. Notably, PromptSteal was reportedly used by Russia-linked APT28 (also known as BlueDelta, Fancy Bear, and FROZENLAKE) against Ukrainian targets, marking the first observed use of LLMs in live malware operations. The emergence of purpose-built AI tools for cybercrime is lowering the barrier for less sophisticated actors, and researchers warn that the integration of AI into malware development could soon lead to more autonomous, adaptive, and harder-to-detect threats. Google has taken steps to disrupt these operations, but the trend signals a shift toward more unpredictable and rapidly evolving attack patterns.
4 months ago