Mallory

DIG AI: Uncensored Darknet AI Assistant Used for Cybercrime and Illicit Activities

Tags: DIG AI, darknet, cybercrime, malicious operations, threat detection, illicit purposes, organized crime, illegal activities, CSAM, automation, jailbroken, synthetic content, security challenges, privacy violations, custom-built
Updated December 18, 2025 at 08:01 PM · 2 sources

A new uncensored AI assistant known as DIG AI has emerged on darknet forums and is rapidly gaining popularity among cybercriminals and organized crime groups. Security researchers observed a significant increase in the use of DIG AI during Q4 2025, particularly over the winter holidays, coinciding with a global surge in illegal activity. DIG AI, along with other "dark LLMs" such as FraudGPT and WormGPT, enables threat actors to automate and scale malicious operations, including cybercrime, extremism, privacy violations, and the spread of misinformation. These tools are typically jailbroken or custom-built large language models with safety restrictions removed, making them attractive for illicit purposes.

DIG AI is accessible via the Tor network, making it difficult for law enforcement to detect and disrupt its use. The tool can generate instructions for a range of illegal activities, from explosive device manufacturing to the creation of child sexual abuse material (CSAM), including hyper-realistic synthetic content. The rise of such AI-powered tools presents new challenges for security professionals and legislators, especially with major global events like the 2026 Winter Olympics and FIFA World Cup on the horizon, as criminals may exploit these technologies to bypass content protection and scale their operations.

Related Stories

DIG AI: Uncensored Darknet AI Tool Empowering Cybercriminals

Researchers at Resecurity have uncovered DIG AI, a powerful and uncensored artificial intelligence tool hosted on the darknet, which is being actively used by cybercriminals to automate sophisticated cyberattacks, generate illicit content, and bypass the ethical safeguards present in mainstream AI models. The tool, first detected in late September 2025, has rapidly gained popularity among threat actors, particularly during the winter holiday season, and is promoted by a darknet actor known as "Pitch." DIG AI offers a suite of specialized models, including an unrestricted text/code generator and an image model for deepfakes, all accessible anonymously via the Tor network without registration requirements. Investigators demonstrated the tool's ability to generate obfuscated malicious code, such as JavaScript backdoors, highlighting its potential to lower the barrier for launching advanced attacks.

The emergence of DIG AI marks a significant escalation in the criminal use of artificial intelligence, raising concerns about the increased automation and sophistication of cyber threats. Security experts warn that the tool's capabilities could be leveraged to target major global events in 2026, such as the Winter Olympics and FIFA World Cup, and that its existence signals a broader trend toward the "criminalization of AI." The tool's promotion alongside other illicit goods on underground forums further underscores the convergence of AI and cybercrime, presenting new challenges for defenders and law enforcement agencies worldwide.

2 months ago

AI-Powered Hacking Tools Proliferate on the Dark Web

A growing underground market for AI-powered hacking tools is emerging on dark web forums, according to research from Palo Alto Networks' Unit 42. These tools, including commercialized versions like WormGPT and free models such as KawaiiGPT, are designed to assist cybercriminals with tasks such as vulnerability scanning, data encryption, and generating malicious code. The accessibility and user-friendly nature of these large language models (LLMs) are significantly lowering the technical barriers for cybercrime, enabling even unskilled individuals to create attack scripts and conduct cyberattacks using simple conversational prompts. While the technical sophistication of these "dark LLMs" remains limited, their primary impact is in democratizing cybercrime by empowering low-level hackers and script kiddies. The tools are particularly useful for generating grammatically correct phishing emails and basic malware, especially for users operating across language barriers. Despite initial fears of highly advanced AI-driven cyberattacks, current evidence suggests that these models are more effective at aiding petty criminals than enabling complex, autonomous cyber operations.

3 months ago

AI-Enabled Cybercrime Services and Emerging Enterprise AI Risks

Group-IB reported that AI is increasingly being operationalized as "crimeware-as-a-service," with weaponized language models and deepfake tooling sold as low-cost, off-the-shelf infrastructure via channels like Telegram. The report cited a sharp rise in dark-web discussion of AI (up 371% since 2019) and described a growing market for "Dark LLMs" (self-hosted models designed for scams and malware, often positioned to run behind Tor and ignore safety controls) priced as low as $30/month, alongside commoditized deepfake/impersonation "synthetic identity" kits advertised for around $5. Group-IB also attributed hundreds of millions of dollars in verified losses to deepfake-enabled fraud in a single quarter.

Separate reporting highlighted enterprise-facing AI risk from both platform incentives and technical weaknesses. Commentary on the ad-driven direction of consumer AI products warned that monetization and behavioral targeting could increase manipulation and abuse potential, while CSO Online reported a Google Gemini prompt-injection weakness that can expose organizations to new classes of data leakage and workflow manipulation when LLMs are connected to enterprise content and actions. A CSO Online "secure browser" comparison piece was largely general guidance and not directly tied to the AI-cybercrime services or the Gemini prompt-injection issue.

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.