Mallory

Surge in Deepfake-Driven Fraud and Synthetic Identity Threats

visual deepfakes, deepfake, synthetic identity, identity theft, AI scams, fraud detection, fraud operations, cross-border fraud, fraud, generative AI, scams, voice cloning, threats, deception, cryptocurrency
Updated December 31, 2025 at 01:10 AM · 4 sources

Artificial intelligence-powered scams, particularly those leveraging deepfakes and synthetic identities, escalated significantly in 2025. Experts warn that the quality and volume of deepfakes have reached a level where they are nearly indistinguishable from authentic media for most people, enabling fraudsters to deceive victims on a global scale. Voice cloning and visual deepfakes have been used to facilitate large-scale scams, while the emergence of synthetic entities has further blurred the line between real and fake identities, complicating fraud detection for financial institutions.

The misuse of stablecoins and lax cryptocurrency oversight have created new avenues for cross-border fraud, with experts predicting these trends will intensify in 2026. Industry leaders emphasize the urgent need for improved data, reporting, and regulatory measures to counteract these evolving threats. The rapid proliferation of generative AI tools has enabled "pig butchering" scams and other fraud operations to target vast populations, underscoring the growing risk posed by synthetic media and AI-driven deception in the financial sector and beyond.

Sources

December 30, 2025 at 12:00 AM
December 29, 2025 at 12:00 AM
December 29, 2025 at 12:00 AM
December 29, 2025 at 12:00 AM

Related Stories

AI-Driven Scams and Deepfake Threats to Identity Security

AI technologies are rapidly transforming the landscape of cybercrime, enabling scammers to create highly convincing deepfakes and personalized attacks that are increasingly difficult for individuals and organizations to detect. Recent research and industry reports highlight a surge in AI-powered scams, with over 70% of consumers encountering scams in the past year and deepfake audio and video emerging as top concerns. Attackers are leveraging social media as a primary channel to target victims, exploiting the widespread use of mobile devices, which often lack adequate security protections. The sophistication of these attacks is exemplified by incidents such as the $25 million fraud at Arup, where a deepfaked videoconference deceived an employee into transferring company funds.

The growing threat of deepfakes and synthetic media is driving a cybersecurity arms race, as organizations struggle to keep pace with evolving attack techniques. Security leaders are increasingly focused on strengthening identity controls, as insurers now scrutinize the maturity and enforcement of identity and access management practices before offering coverage. Research also reveals that current identity document verification systems are hampered by limited and non-diverse training data, making them vulnerable to advanced fraud tactics. As AI continues to lower the barrier for attackers, both technical and human-centric defenses must adapt to counter the risks posed by synthetic identities and technology-enhanced social engineering.

3 months ago
AI-Enabled Social Engineering and Scams Using Deepfakes and Automation

AI is accelerating and scaling social engineering by automating reconnaissance, targeting, and victim engagement, reducing both the cost and skill required to run convincing phishing and fraud campaigns. One reported evolution is the use of **AI agents** to collect open-source intelligence and conduct live, interactive conversations with targets with minimal or no human involvement, enabling high-volume, continuously running scam operations that adapt in real time. Deepfake-enabled impersonation is further eroding trust in voice and video communications, including calls and meetings, with examples cited of finance staff being deceived into transferring **millions** after interacting with fabricated “executives.” Recommended mitigations emphasize shifting from trusting a human's judgment of how authentic a caller sounds or looks to process-based controls (enforced verification procedures, out-of-band checks, shared authentication phrases or “safe words,” and emerging *content provenance* approaches), because traditional, predictable detection models are increasingly strained by the speed, personalization, and adaptability of AI-driven attacks.
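The process-based controls described above can be sketched in code. The following is a minimal, hypothetical illustration (not a real API or the method of any vendor cited): a high-risk request received over voice or video is approved only after the approver re-contacts the requester on a pre-registered out-of-band channel and the requester supplies a shared safe word. The directory contents, channel names, and passphrase are all invented for the example.

```python
import hashlib
import hmac
from dataclasses import dataclass


@dataclass(frozen=True)
class PaymentRequest:
    requester: str   # identity claimed on the original call
    amount: float
    channel: str     # channel the request arrived on, e.g. "video-call"


# Pre-registered directory: callback channel plus a salted hash of the
# shared safe word, so the passphrase is never stored in plain text.
# All entries here are illustrative placeholders.
DIRECTORY = {
    "cfo@example.com": {
        "callback": "desk-phone:+1-555-0100",
        "safeword_hash": hashlib.sha256(b"salt:orange-teapot").hexdigest(),
    }
}


def safeword_matches(requester: str, spoken_word: str) -> bool:
    """Check the spoken safe word against the registered hash."""
    entry = DIRECTORY.get(requester)
    if entry is None:
        return False
    candidate = hashlib.sha256(b"salt:" + spoken_word.encode()).hexdigest()
    # Constant-time comparison avoids leaking how many leading
    # characters of the hash matched.
    return hmac.compare_digest(candidate, entry["safeword_hash"])


def approve(request: PaymentRequest, spoken_word: str, callback_used: str) -> bool:
    """Approve only if both out-of-band checks pass.

    The key property: approval never depends on how convincing the
    voice or video on the original call was, so a deepfake of the
    requester gains nothing without the registered channel and safe word.
    """
    entry = DIRECTORY.get(request.requester)
    if entry is None:
        return False
    if callback_used != entry["callback"]:
        # Must re-contact via the pre-registered channel, not the
        # channel the request arrived on.
        return False
    return safeword_matches(request.requester, spoken_word)
```

A request arriving over a convincing video call is thus rejected unless the approver independently calls back the registered desk phone and hears the correct safe word; a wrong callback channel or wrong phrase fails closed.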

1 month ago
AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale

Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.

1 month ago
