Mallory

Commercialization of Malicious LLMs for Cybercrime

LLMs, cybercrime, cybercrime-as-a-service, ransomware, phishing, exploit, threat intelligence, WormGPT, attack code, Palo Alto Networks, penetration testing, Telegram
Updated December 8, 2025 at 11:01 PM · 2 sources

Malicious large language models (LLMs) such as WormGPT 4 and KawaiiGPT are now being actively marketed and distributed within cybercrime communities, with WormGPT 4 available for $50 per month on Telegram and KawaiiGPT offered as open source on GitHub. Security researchers from Palo Alto Networks' Unit 42 have analyzed these tools, highlighting their ability to generate functional ransomware code with AES-256 encryption, Tor-based data exfiltration, and scripts for SSH lateral movement, all within seconds. These LLMs are designed without ethical guardrails, enabling threat actors to automate and enhance the quality of attacks, including spear-phishing, payload generation, and real-time execution of malicious code.

The emergence of these offensive LLMs marks a shift from theoretical concerns to practical, commercialized tools that lower the barrier for cybercriminals. The models feature subscription tiers, active user communities, and the ability to generate sophisticated attack code on demand, demonstrating the growing integration of artificial intelligence into the cybercrime-as-a-service ecosystem. Security experts warn that the adoption of such AI-driven tools is likely to accelerate the speed and effectiveness of cyberattacks, posing new challenges for defenders.

Sources

December 8, 2025 at 12:00 AM

Related Stories

Malicious LLMs Enable Low-Skilled Attackers with Advanced Cybercrime Tools

Unrestricted large language models (LLMs) such as WormGPT 4 and KawaiiGPT are being leveraged by cybercriminals to generate sophisticated malicious code, including ransomware scripts and phishing messages. Researchers from Palo Alto Networks' Unit 42 demonstrated that WormGPT 4, a paid, uncensored ChatGPT variant, can produce functional PowerShell scripts for encrypting files with AES-256, automate data exfiltration via Tor, and craft convincing ransom notes, effectively lowering the barrier for inexperienced hackers to conduct advanced attacks. KawaiiGPT, a free community-driven alternative, was also found to generate well-crafted phishing content and automate lateral movement, further democratizing access to cybercrime capabilities. The proliferation of these malicious LLMs is accelerating the adoption of advanced attack techniques among less skilled threat actors, enabling them to perform operations that previously required significant expertise. The tools are available through paid subscriptions or free local instances, making them accessible to a wider range of cybercriminals. Security researchers warn that the credible linguistic manipulation and automation provided by these LLMs could lead to an increase in the volume and sophistication of cyberattacks, including business email compromise (BEC), phishing, and ransomware campaigns.

3 months ago

AI-Powered Hacking Tools Proliferate on the Dark Web

A growing underground market for AI-powered hacking tools is emerging on dark web forums, according to research from Palo Alto Networks' Unit 42. These tools, including commercialized versions like WormGPT and free models such as KawaiiGPT, are designed to assist cybercriminals with tasks such as vulnerability scanning, data encryption, and generating malicious code. The accessibility and user-friendly nature of these large language models (LLMs) are significantly lowering the technical barriers for cybercrime, enabling even unskilled individuals to create attack scripts and conduct cyberattacks using simple conversational prompts. While the technical sophistication of these "dark LLMs" remains limited, their primary impact is in democratizing cybercrime by empowering low-level hackers and script kiddies. The tools are particularly useful for generating grammatically correct phishing emails and basic malware, especially for users operating across language barriers. Despite initial fears of highly advanced AI-driven cyberattacks, current evidence suggests that these models are more effective at aiding petty criminals than enabling complex, autonomous cyber operations.

3 months ago

Malware Leveraging Large Language Models for Dynamic Capabilities

Security researchers have identified a new trend in which threat actors are embedding large language models (LLMs) directly into malware to enhance its capabilities and evade detection. Akamai Hunt discovered a novel malware strain that disguises its command and control (C2) traffic as legitimate LLM API requests, using Base64-encoded strings to communicate and potentially allowing attackers full control over compromised systems and data exfiltration. This approach enables malicious traffic to blend in with normal AI-related network activity, making detection more challenging for defenders. Further analysis and industry reporting highlight that malware families such as PromptFlux and PromptSteal are now querying LLMs mid-execution to dynamically alter their behavior, obfuscate code, and generate system commands on demand. PromptFlux, for example, uses the Gemini API to regularly re-obfuscate its source code, while PromptSteal leverages the Hugging Face API for real-time reconnaissance and exfiltration commands. These developments underscore the need for organizations to adapt their security controls and detection strategies to address the evolving threat landscape where AI and LLMs are weaponized by attackers.

3 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.