Mallory

AI-Enabled Financial Fraud and the Shift to Network Intelligence Defenses

Tags: fraud detection, network intelligence, financial fraud, on-chain forensics, shared intelligence, financial institutions, crypto scams, blockchain analytics, phishing, money laundering, device fingerprinting, illicit transactions, automation, generative AI, anomaly detection
Updated February 24, 2026 at 10:00 AM · 3 sources


Threat actors are increasingly using generative AI to industrialize crypto-enabled fraud, turning scams into high-volume, rapidly iterated campaigns that leverage automation, personalization, and synthetic identities. TRM Labs reported illicit crypto transaction volume of $158B in 2025 (up ~145% YoY) and estimated ~$30B in scam-related activity, noting an observed ~500% increase in AI-enabled scam activity over the past year; cited tactics include AI-assisted phishing/impersonation and automation that can accelerate laundering workflows. Despite the speed and scale gains for criminals, TRM emphasized that blockchain transparency still provides defenders an advantage because on-chain activity remains observable for clustering, anomaly detection, and forensic investigation when paired with defensive analytics.
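The defender-side techniques TRM points to, clustering and anomaly detection over observable on-chain activity, can be illustrated with a minimal sketch: the function below flags outlier transfer amounts using a median absolute deviation (MAD) score, a robust alternative to standard z-scores for the heavy-tailed value distributions typical of on-chain transfers. The function name and the 3.5 threshold are illustrative choices, not details from the report.

```python
from statistics import median

def mad_anomalies(amounts, threshold=3.5):
    """Flag amounts whose modified z-score exceeds `threshold`.

    Uses median absolute deviation (MAD) rather than the standard
    deviation, so a single huge transfer does not mask itself by
    inflating the baseline.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all values identical: nothing stands out
        return [False] * len(amounts)
    # 0.6745 rescales MAD so scores are comparable to z-scores
    return [abs(0.6745 * (a - med) / mad) > threshold for a in amounts]

# Routine transfers plus one outsized movement to a fresh wallet
amounts = [120, 95, 110, 105, 98, 50_000]
flags = mad_anomalies(amounts)  # only the last transfer is flagged
```

In practice such a score would be computed per address cluster and combined with graph features, but the robust-statistics core is the same.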

Financial institutions are also adjusting fraud detection strategies to better address cross-entity, fast-moving fraud—especially in instant payments, where decision windows can be seconds. BankInfoSecurity described a shift from single-institution anomaly detection toward shared network intelligence that correlates relationships among accounts, devices, and identities across organizations to identify mule networks and risky counterparties that may appear “new” at one bank but are already flagged elsewhere. The approach is positioned as a new detection surface that complements machine learning by focusing on connections and ecosystem visibility, reducing attackers’ ability to exploit intelligence gaps between institutions.
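The shared network intelligence described above is, at its core, a graph problem: collapse accounts, devices, and identities contributed by different institutions into connected clusters, so that an account that looks "new" at one bank inherits the risk of entities already flagged elsewhere. A minimal sketch using union-find follows; the `EntityGraph` class and all entity names are hypothetical, not any vendor's API.

```python
class EntityGraph:
    """Union-find over account/device/identity nodes pooled from
    multiple institutions; entities sharing a device or identity
    collapse into one cluster."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # path halving keeps lookups near-constant time
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        self.parent[self.find(a)] = self.find(b)

    def same_cluster(self, a, b):
        return self.find(a) == self.find(b)

g = EntityGraph()
g.link("bankB:acct-77", "device:abc123")  # Bank B flagged this mule/device pair
g.link("bankA:acct-01", "device:abc123")  # "new" account at Bank A, same device
g.same_cluster("bankA:acct-01", "bankB:acct-77")  # True: linked via shared device
```

Real deployments add edge weights, decay, and privacy-preserving identifiers, but the connectivity query is the new detection surface the article describes.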


Related Stories

AI-Driven Financial Fraud and Phishing Campaigns Targeting Financial Services

Financial services organizations are facing a surge in sophisticated fraud attempts enabled by artificial intelligence, as threat actors leverage AI tools to automate and scale their attacks. Recent reports highlight that AI is being used to craft highly convincing phishing emails and social engineering campaigns, making it increasingly difficult for traditional security measures to detect malicious activity. Attackers are utilizing generative AI to personalize messages, mimic legitimate communications, and evade standard email filters, thereby increasing the success rate of phishing attempts.

In response, financial institutions are adopting advanced AI-powered security solutions designed to identify and block these next-generation threats. These defensive tools analyze behavioral patterns, detect anomalies, and adapt to evolving attack techniques, providing a dynamic shield against AI-driven fraud. The deployment of agentic AI systems allows organizations to automate threat detection and response, reducing the window of opportunity for attackers. Security teams are also leveraging machine learning to monitor transaction patterns and flag suspicious activities in real time, helping to prevent unauthorized transfers and account takeovers.

The integration of AI into both offensive and defensive cyber operations marks a significant escalation in the financial fraud landscape. Experts warn that as AI technology becomes more accessible, the volume and complexity of attacks will continue to rise, necessitating ongoing investment in AI-based defenses. Training and awareness programs are being updated to educate employees about the risks posed by AI-generated phishing and social engineering. Regulatory bodies are also beginning to issue guidance on the ethical use of AI in financial services, emphasizing the need for transparency and accountability.
Collaboration between industry stakeholders is increasing, with information sharing initiatives aimed at identifying emerging AI-driven threats. The rapid evolution of AI capabilities underscores the importance of proactive security strategies and continuous monitoring. Financial organizations are urged to assess their current defenses and consider the adoption of agentic AI tools to stay ahead of adversaries. The convergence of AI in both attack and defense highlights a new era in cybersecurity, where automation and intelligence are central to both risk and resilience. As the threat landscape evolves, the ability to rapidly detect and respond to AI-enabled fraud will be a key differentiator for secure financial operations.
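Real-time monitoring of transaction patterns, as the story describes, typically starts with simple velocity rules before model-based scoring is layered on top. A minimal, assumed sketch of a sliding-window rule follows; the `VelocityMonitor` name and the thresholds are illustrative.

```python
from collections import deque

class VelocityMonitor:
    """Sliding-window rule: flag an account that initiates more than
    `max_txns` transfers within `window_s` seconds, a cheap real-time
    complement to model-based fraud scoring."""

    def __init__(self, max_txns=3, window_s=60):
        self.max_txns = max_txns
        self.window_s = window_s
        self.history = {}  # account -> deque of recent timestamps

    def observe(self, account, ts):
        q = self.history.setdefault(account, deque())
        q.append(ts)
        # drop events that have aged out of the window
        while q and ts - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_txns  # True => flag for review

m = VelocityMonitor(max_txns=3, window_s=60)
# four transfers in 15 seconds trips the rule; a later lone transfer does not
flags = [m.observe("acct-9", t) for t in (0, 5, 10, 15, 200)]
```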

5 months ago
AI-Enabled Fraud Scams Industrialized by Transnational Criminal Networks


**Transnational criminal networks** are increasingly industrializing online fraud with **AI-enabled social engineering**, according to reporting on scam compounds in Southeast Asia, an Interpol assessment, and policy commentary tied to a new US executive order. Fraud operations linked to *pig-butchering* and romance scams are using generative AI to improve language quality, deepfakes to impersonate trusted people, and low-cost "deepfake-as-a-service" offerings to scale deception. Interpol said AI-assisted fraud is **4.5 times more profitable** than non-AI schemes, while broader reporting describes these operations as structured, multinational enterprises that function like businesses and increasingly rely on automation, synthetic identities, and persuasive impersonation at scale. Reporting from Cambodia and the wider region shows scam operators are now recruiting "**AI face models**" to appear on high-volume deepfake video calls, including applicants from multiple countries seeking work in compounds associated with trafficking-linked fraud operations. The same ecosystem has been described as part of a broader organized-crime model involving forced labor, cryptocurrency investment scams, romance fraud, and impersonation schemes targeting victims globally. One reference on calculating AI ROI in enterprise cybersecurity is **not about this fraud campaign ecosystem**, and an EU sanctions announcement concerns separate state-linked cyber incidents rather than financially motivated AI-enabled fraud.

Today

AI-Driven Cyber Threats and the Evolution of Fraud and Defense Tactics

Cybercriminals are increasingly leveraging artificial intelligence, automation, and stolen credentials to conduct large-scale, sophisticated attacks across multiple sectors. The 2025 holiday season is seeing a surge in fraud campaigns that begin earlier than ever, with attackers using AI to mimic legitimate consumer behavior, automate credential stuffing, and bypass traditional detection systems. Underground marketplaces now efficiently trade automation kits and malicious configurations, making fraud a continuous, data-driven threat rather than one limited to peak shopping periods. Security experts warn that organizations relying solely on heightened monitoring during traditional high-risk windows are at greater risk, as adversaries pre-position and refine their attack infrastructure well in advance.

To counter these evolving threats, cybersecurity leaders emphasize the need for predictive and adaptive defense systems powered by AI. Rather than relying on reactive measures, organizations are urged to operationalize threat intelligence by integrating machine learning, behavioral analytics, and automation into their security operations. This approach enables real-time detection, contextual analysis, and rapid response, bridging the gap between intelligence collection and incident containment. However, experts caution that AI must be paired with human oversight and strong governance to ensure trust, transparency, and effective decision-making in the face of increasingly polymorphic and evasive attacks.
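One concrete signal behind credential-stuffing detection of the kind mentioned above is the share of distinct usernames among failed logins from a single source: stuffing sprays many accounts once each, while a forgetful legitimate user retries one account. A hedged sketch follows; the function name and the minimum-event threshold are illustrative assumptions.

```python
def stuffing_score(attempts):
    """Heuristic credential-stuffing signal for one source IP.

    Returns the fraction of failed logins that used a distinct
    username: near 1.0 suggests an account spray, near 0.0 suggests
    one user retrying a forgotten password.
    """
    failures = [a for a in attempts if not a["success"]]
    if len(failures) < 5:  # too few events to judge
        return 0.0
    users = {a["user"] for a in failures}
    return len(users) / len(failures)

# One IP, eight failures across eight different accounts
spray = [{"user": f"u{i}", "success": False} for i in range(8)]
score = stuffing_score(spray)  # 1.0: every failure hit a new account
```

A production system would combine this ratio with device fingerprints, IP reputation, and timing features rather than use it alone.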

4 months ago
