Mallory

AI-Enabled Cybercrime: Fake ID Generation and Alleged Claude-Assisted Attacks on Mexican Agencies

identity fraud, credentials theft, fake id, data theft, counterfeit documents, deepfakes, undercover fbi, exploit scripts, social security cards, onlyfake, chatbots, forgery, vulnerability discovery, kyc bypass, taxpayer data
Updated March 2, 2026 at 07:01 PM · 4 sources


A Ukrainian national, Yurii Nazarenko (aliases including “John Wick”), pleaded guilty in U.S. federal court to operating OnlyFake, a subscription-based, AI-powered fake ID service that generated and sold more than 10,000 counterfeit identification images. Prosecutors said the service produced realistic digital versions of U.S. driver’s licenses (all 50 states), U.S. passports and passport cards, Social Security cards, and IDs for dozens of other countries, with options to customize personal details and output style (e.g., scan vs. tabletop photo). Authorities assessed the primary criminal use as bypassing KYC controls at banks and cryptocurrency exchanges; undercover FBI purchases (paid in cryptocurrency) reportedly yielded fake New York IDs, U.S. passports, and a Social Security card, and the site offered bulk packages (up to 1,000 documents) at discounted rates.

Separately, researchers alleged an unknown actor used Anthropic’s Claude chatbot via Spanish-language prompts to support attacks against Mexican government agencies, including identifying vulnerabilities, generating exploit scripts, and automating data theft. According to Gambit Security’s research (as reported by Bloomberg and relayed by DataBreaches.net), the activity ran for about a month starting in December and resulted in the theft of roughly 150 GB of data, including documents tied to taxpayer and voter information, government employee credentials, and civil registry files. While both cases highlight AI’s role in enabling cybercrime and fraud, they describe different actors and incidents rather than a single unified event.


Related Stories

AI-Assisted Intrusions Against Mexican Government Agencies Using Anthropic Claude and OpenAI ChatGPT


Researchers at **Gambit Security** reported that a small group of attackers used **LLMs**—including **Anthropic Claude** and **OpenAI ChatGPT**—to help compromise at least **nine Mexican government agencies**, stealing large volumes of sensitive records including **~195 million identity and tax records**, **vehicle registrations**, and **~2.2 million property records**. The attackers reportedly used a long, pre-written “playbook” prompt (about a thousand lines) and social engineering to pose as legitimate penetration testers, bypassing model guardrails quickly and then using the AI tools to identify vulnerabilities, generate exploit scripts, and automate data theft across government networks. Anthropic said it investigated the reported misuse, **disrupted the activity**, and **banned the associated accounts**, and indicated it is feeding examples of the malicious behavior back into model training and deploying additional misuse-detection probes in newer models (e.g., *Claude Opus 4.6*). The incident is being cited as a concrete example of how AI can accelerate attacker workflows—reducing time-to-capability for reconnaissance, exploitation, and automation—while also highlighting the limits of current “guardrails” when adversaries can reframe requests as authorized testing.

1 week ago
AI-driven shifts in cybersecurity: agentic AI risks, AI-assisted offensive tradecraft, and evolving cybercriminal ecosystems


Security reporting and research highlighted how **AI and automation are reshaping both attacker tradecraft and defender operations**, while introducing new enterprise risk. ZDNET described research findings that **agentic AI implementations** from *ServiceNow* and *Microsoft* can be **exploitable**, warning that broadly permissioned agents could enable **lateral movement and privilege escalation** across systems of record if an attacker compromises an agent or chains between agents with different access levels; a **least-privilege** posture for agents was emphasized. Dark Reading separately reported that **AI agents are increasingly augmenting—and in some cases supplanting—human penetration testing** for “low-hanging” vulnerabilities, but that **false positives and the need for human oversight** remain material constraints as agentic testing matures.
Threat-intelligence coverage also underscored the **industrialization of cybercrime** and the ecosystems enabling it. CloudSEK detailed the evolution of the English-speaking cybercriminal milieu known as **“The COM,”** tracing its roots in OG-handle trading communities and forum migrations into a service-oriented underground linked to groups such as **Lapsus$**, **ShinyHunters**, **Scattered Spider (UNC3944)**, and **Silent Ransom Group**, and associated activity spanning breaches, extortion, SIM swapping, ransomware, and crypto fraud. SC Media’s commentary similarly described a cyber underground where criminals can readily buy capabilities (credentials, tooling, automation), calling out techniques including **carding** and **ClickFix** social engineering that tricks users into running copied commands to install infostealers.
Separately, Dark Reading reported allegations that the **Chronus Group** posted **2.3TB** of purported Mexican government data affecting up to **36 million** people, while Mexico’s **ATDT** disputed it as largely **repackaged data from prior breaches** and said no new sensitive accounts were identified and that impacted systems were primarily **obsolete, third-party-administered** state-level platforms.
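The least-privilege posture recommended for AI agents can be illustrated with a deny-by-default allow-list around tool invocations, so a compromised or manipulated agent cannot reach tools outside its scope. This is a minimal sketch; the `AgentScope` class and the tool names are hypothetical illustrations, not any vendor's actual API.

```python
# Minimal sketch of a least-privilege gate for AI agent tool calls.
# AgentScope and the tool names below are hypothetical, for illustration only.

class ToolNotPermitted(Exception):
    """Raised when an agent attempts a tool outside its allow-list."""


class AgentScope:
    """Explicit, deny-by-default allow-list of tools an agent may invoke."""

    def __init__(self, agent_id, allowed_tools):
        self.agent_id = agent_id
        self.allowed_tools = frozenset(allowed_tools)

    def invoke(self, tool_name, handler, *args):
        # Only tools on the allow-list run; everything else is refused,
        # which limits lateral movement if the agent itself is subverted.
        if tool_name not in self.allowed_tools:
            raise ToolNotPermitted(
                f"{self.agent_id} is not permitted to call {tool_name}")
        return handler(*args)


# A ticket-triage agent gets read-only access: no write or admin tools.
triage = AgentScope("triage-bot", {"read_ticket"})
print(triage.invoke("read_ticket", lambda tid: f"ticket {tid} ok", 42))
try:
    triage.invoke("update_record", lambda rec: rec)
except ToolNotPermitted as e:
    print("blocked:", e)
```

The key design choice is that permissions live outside the model: the gate is enforced in ordinary code, so a prompt-injected or "reframed" request cannot talk its way past it.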

1 month ago
Generative AI Accelerates Identity-Based Attacks and Industrialized Fraud Markets


Security leaders and new research warn that **generative AI** is accelerating a shift toward **identity-based compromise**—notably phishing, social engineering, and impersonation—because traditional controls have reduced the effectiveness of brute-force and other “old-style” attacks. Thales’ Americas CISO Eric Liebowitz argues organizations should respond with stronger identity-focused defenses, including sustained employee training that goes beyond “red flag” spotting, **user behavior baselining** to detect anomalies, and technical controls such as internal AI-assisted defenses and **DLP** to counter increasingly capable *agentic* adversaries. Separate reporting highlights how the same trend is being monetized at scale: AMLTRIX research found an industrialized dark web market for **stolen and fabricated identities**, with “full identity packages” (ID scans plus matching selfies) priced as low as **$30**, enabling repeated account creation for laundering before detection; **pre-verified accounts** command a premium (e.g., verified crypto accounts at **$200–$400**), reflecting the difficulty of defeating live verification. Nametag’s 2026 workforce impersonation findings similarly warn that **deepfake-as-a-service** and readily available AI tooling are making high-value corporate fraud (e.g., spear-phishing and CEO fraud) more accessible, and that **consumer-grade identity verification** will be insufficient against injected deepfakes—driving a need for more continuous, hardware-backed verification and controls that account for emerging risks such as **prompt-injection-based poisoning of AI agent memory**.
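The user behavior baselining mentioned above can be sketched as a simple statistical check: build a per-user baseline from historical activity and flag values that deviate sharply from it. This is a minimal illustration, not a production detection rule; the daily login counts and the z-score threshold are assumptions.

```python
# Minimal sketch of user-behavior baselining via z-score anomaly detection.
# The data and the 3-sigma threshold are illustrative assumptions.
import statistics


def is_anomalous(history, today, z_threshold=3.0):
    """Return True if today's value sits more than z_threshold
    standard deviations from the user's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # Perfectly constant baseline: any change is a deviation.
        return today != mean
    return abs(today - mean) / stdev > z_threshold


# ~30 days of daily login counts for one account, then a sudden burst.
baseline = [4, 5, 3, 4, 6, 5, 4, 3, 5, 4, 5, 6, 4, 3, 5,
            4, 5, 4, 6, 5, 4, 3, 4, 5, 6, 4, 5, 4, 3, 5]
print(is_anomalous(baseline, 5))    # a typical day
print(is_anomalous(baseline, 60))   # a burst consistent with credential abuse
```

Real deployments layer many such signals (time of day, source network, resource access patterns) and feed them into the DLP and anomaly-detection controls the article describes, but the underlying idea is the same: detect departures from each user's own normal.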

2 months ago
