Mallory

AI-Enabled Fraud Scams Industrialized by Transnational Criminal Networks

fraud · human trafficking · scam compounds · romance scams · social engineering · generative AI · deepfakes · synthetic identities · impersonation · forced labor
Updated March 17, 2026 at 12:09 AM · 3 sources


Transnational criminal networks are increasingly industrializing online fraud with AI-enabled social engineering, according to reporting on scam compounds in Southeast Asia, an Interpol assessment, and policy commentary tied to a new US executive order. Fraud operations linked to pig-butchering and romance scams are using generative AI to improve language quality, deepfakes to impersonate trusted people, and low-cost "deepfake-as-a-service" offerings to scale deception. Interpol said AI-assisted fraud is 4.5 times more profitable than non-AI schemes, while broader reporting describes these operations as structured, multinational enterprises that function like businesses and increasingly rely on automation, synthetic identities, and persuasive impersonation at scale.

Reporting from Cambodia and the wider region shows scam operators are now recruiting "AI face models" to appear on high-volume deepfake video calls, including applicants from multiple countries seeking work in compounds associated with trafficking-linked fraud operations. The same ecosystem has been described as part of a broader organized-crime model involving forced labor, cryptocurrency investment scams, romance fraud, and impersonation schemes targeting victims globally. One reference on calculating AI ROI in enterprise cybersecurity is not about this fraud campaign ecosystem, and an EU sanctions announcement concerns separate state-linked cyber incidents rather than financially motivated AI-enabled fraud.

Related Stories

AI-Enabled Social Engineering and Scams Using Deepfakes and Automation


AI is accelerating and scaling social engineering by automating reconnaissance, targeting, and victim engagement, reducing both the cost and skill required to run convincing phishing and fraud campaigns. One reported evolution is the use of **AI agents** to collect open-source intelligence and hold live, interactive conversations with targets with minimal or no human involvement, enabling high-volume, continuously running scam operations that adapt in real time. Deepfake-enabled impersonation is further eroding trust in voice and video communications, including calls and meetings, with examples cited of finance staff being deceived into transferring **millions** after interacting with fabricated “executives.” Recommended mitigations emphasize shifting from human-sense validation to process-based controls, such as enforced verification procedures, out-of-band checks, shared authentication phrases (“safe words”), and emerging *content provenance* approaches, because traditional detection models built on predictable attacker behavior are increasingly strained by the speed, personalization, and adaptability of AI-driven attacks.
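The process-based controls described above can be expressed as policy rather than judgment. Below is a minimal illustrative sketch of such a rule, not any organization's actual control: above a threshold, a payment request must be confirmed on a channel independent of the one it arrived on, and a pre-shared phrase must be verified. All names and the threshold value are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative limit: transfers at or above this amount require
# out-of-band confirmation, no matter how convincing the caller seemed.
OUT_OF_BAND_THRESHOLD = 10_000

@dataclass
class PaymentRequest:
    amount: float
    request_channel: str                 # e.g. "video_call"
    confirmed_channel: Optional[str]     # channel used for callback, or None
    safe_word_verified: bool             # shared authentication phrase checked

def approve(req: PaymentRequest) -> bool:
    """Approve only when process controls pass, never on 'it looked real'."""
    if req.amount < OUT_OF_BAND_THRESHOLD:
        return True  # low-value: routine controls apply
    out_of_band = (
        req.confirmed_channel is not None
        and req.confirmed_channel != req.request_channel
    )
    return out_of_band and req.safe_word_verified
```

The point of the design is that a deepfaked video call can never self-authorize: confirming on the same channel the request arrived on (even with the safe word) still fails, forcing verification through a path the impersonator does not control.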

1 month ago
AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale


Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.

1 month ago
AI-Enabled Financial Fraud and the Shift to Network Intelligence Defenses


Threat actors are increasingly using **generative AI** to industrialize crypto-enabled fraud, turning scams into high-volume, rapidly iterated campaigns that leverage automation, personalization, and synthetic identities. TRM Labs reported illicit crypto transaction volume of **$158B in 2025** (up ~145% YoY) and estimated **~$30B** in scam-related activity, noting an observed **~500% increase** in AI-enabled scam activity over the past year; cited tactics include AI-assisted phishing/impersonation and automation that can accelerate laundering workflows. Despite the speed and scale gains for criminals, TRM emphasized that blockchain transparency still provides defenders an advantage, because on-chain activity remains observable for clustering, anomaly detection, and forensic investigation when paired with defensive analytics.

Financial institutions are also adjusting fraud detection strategies to better address cross-entity, fast-moving fraud, especially in **instant payments**, where decision windows can be seconds. BankInfoSecurity described a shift from single-institution anomaly detection toward **shared network intelligence** that correlates relationships among accounts, devices, and identities across organizations to identify mule networks and risky counterparties that may appear “new” at one bank but are already flagged elsewhere. The approach is positioned as a new detection surface that complements machine learning by focusing on *connections* and ecosystem visibility, reducing attackers’ ability to exploit intelligence gaps between institutions.
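The cross-institution correlation idea can be sketched in a few lines. The following is an illustrative toy, not any vendor's actual system: accounts observed at different institutions are linked whenever they share an identifier such as a device fingerprint, phone number, or payout wallet, and connected components in that graph surface candidate mule networks that no single bank sees in full. All identifiers and function names here are invented for the example.

```python
from collections import defaultdict

def mule_clusters(observations):
    """observations: (account_id, shared_identifier) pairs pooled
    across institutions. Returns groups of accounts linked through
    shared identifiers (union-find over a bipartite graph)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each account node to every identifier it was seen with.
    for account, identifier in observations:
        union(("acct", account), ("id", identifier))

    clusters = defaultdict(set)
    for account, _ in observations:
        clusters[find(("acct", account))].add(account)

    # Only multi-account components are interesting as candidate networks.
    return [group for group in clusters.values() if len(group) > 1]
```

In this toy model, an account that looks “new” at bank B but shares a device fingerprint with accounts already flagged at bank A falls into the same component, which is exactly the intelligence gap the shared-network approach is meant to close.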

3 weeks ago
