Mallory

Emergence of AI-Driven Romance Scams and Crypto Phishing Threats

transaction phishing, approval phishing, scams, phishing, cybercriminals, social engineering, cryptocurrency, digital wallets, AI, fake wallet, romance, dApps, Kaspersky, automation, financial extraction
Updated December 30, 2025 at 10:01 PM · 2 sources

New research has revealed that romance scams are increasingly being automated through the use of large language models (LLMs), allowing cybercriminals to scale their operations and make scam interactions more convincing. These scams typically follow a three-stage process: initial contact, relationship building, and financial extraction, with LLMs now handling much of the repetitive conversation and persona management. Insiders from scam operations report daily use of AI tools to draft and translate messages, making it easier to maintain multiple simultaneous conversations and deceive victims into fraudulent cryptocurrency investments.

In parallel, the threat landscape for cryptocurrency users has intensified, with phishing attacks targeting digital wallets and decentralized applications (dApps) on the rise. According to a 2025 Kaspersky report, crypto-related phishing detections surged by over 80% compared to 2023, with social engineering scams accounting for the largest share of incidents. Attackers employ tactics such as fake wallet sites, approval phishing, and payload-based transaction phishing, resulting in hundreds of millions of dollars in losses. These developments underscore the growing sophistication and automation of social engineering attacks in the cryptocurrency ecosystem, driven by advances in AI and the expanding use of digital assets.
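
To make the "approval phishing" tactic concrete, below is a minimal Python sketch that checks whether a transaction's calldata is an ERC-20 `approve` call requesting an effectively unlimited allowance, which is the kind of request these lures typically push wallet users to sign. The function selector `0x095ea7b3` corresponds to the standard `approve(address,uint256)` signature; the spender address, cutoff threshold, and example calldata are illustrative assumptions rather than data from a real incident.

```python
# Minimal sketch: flag an ERC-20 approve() call with an effectively unlimited
# allowance, a common red flag in approval-phishing prompts.
APPROVE_SELECTOR = "095ea7b3"   # selector for approve(address,uint256)
UNLIMITED_THRESHOLD = 2**255    # treat anything this large as "unlimited" (assumed cutoff)

def parse_approval(calldata: str):
    """Return (spender, amount) if calldata encodes an ERC-20 approve call, else None."""
    data = calldata.removeprefix("0x")
    if not data.startswith(APPROVE_SELECTOR) or len(data) < 8 + 64 + 64:
        return None
    spender = "0x" + data[8 + 24 : 8 + 64]    # address sits in the low 20 bytes of word 1
    amount = int(data[8 + 64 : 8 + 128], 16)  # allowance is the full 32 bytes of word 2
    return spender, amount

def looks_like_unlimited_approval(calldata: str) -> bool:
    parsed = parse_approval(calldata)
    return parsed is not None and parsed[1] >= UNLIMITED_THRESHOLD

# Illustrative calldata: approve(<placeholder spender>, 2**256 - 1)
example = (
    "0x095ea7b3"
    + "000000000000000000000000" + "ab" * 20  # placeholder spender address
    + "f" * 64                                # max uint256 allowance
)
print(looks_like_unlimited_approval(example))  # True
```

Real wallets decode calldata with full ABI tooling rather than string slicing; this stripped-down check is only meant to show why an unbounded allowance request to an unfamiliar spender is the tell-tale signal behind approval phishing.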


Related Stories

AI-Enabled Romance Scams Using Deepfakes and Fake Cryptocurrency Lures

A surge in **romance scams** is leveraging **AI-enabled impersonation** to make fraud harder to detect, combining manufactured intimacy with financial theft. Australian police warned more than 5,000 people they may have been targeted in a large-scale operation linked to overseas syndicates, where scammers used mainstream dating apps to initiate relationships and then steered victims into purchasing **fake cryptocurrency**. The playbook described includes rapidly escalating emotional commitment, isolating targets, and pushing conversations off-platform to apps like *WhatsApp* or *Telegram*, reducing victims’ access to in-app safety controls and reporting mechanisms. The fraud techniques increasingly rely on **deepfakes** and automated “AI personas,” undermining traditional verification methods such as requesting custom photos or relying on video calls as proof of identity. Reported tactics include real-time face-swapping and AI voice synthesis during video calls, long-running bot-driven conversations that build trust over months, and “celebrity” impersonation to intensify emotional leverage and extract larger payments. Despite the technology shift, the core mechanism remains psychological manipulation—using scripted narratives and social engineering to move victims from online rapport to off-platform communication and ultimately to financial transfers.

1 month ago
Chainalysis Reports Surge in Crypto Scams Driven by Impersonation and AI-Enabled Fraud

Chainalysis reported that **cryptocurrency scams and fraud generated an estimated $17B in victim losses in 2025**, making it the largest year on record in its tracking, with at least **$14B observed on-chain** and expectations that totals will rise as additional illicit addresses are identified. The report attributes the increase to the continued industrialization of scam operations and infrastructure, including *phishing-as-a-service*, AI-generated deepfakes, and professional money-laundering networks, alongside major scam categories such as **pig butchering/romance scams** and HYIP-style schemes. Chainalysis also assessed that scam efficiency increased materially, citing a **253% YoY rise in average scam payment** (from **$782 in 2024** to **$2,764 in 2025**) and noting that **AI-enabled scams** can be significantly more profitable than traditional approaches. A key driver highlighted was the rapid growth of **impersonation scams**, which Chainalysis said rose roughly **1,400% YoY**, with average payments to those clusters up more than **600%**. One example cited was an **E‑ZPass-themed smishing campaign** that used fake toll-payment texts and lookalike sites to deceive victims; Chainalysis linked this activity to the Chinese-speaking group **“Darcula” / “Smishing Triad,”** and referenced reporting and legal action describing tooling and templates used to scale these lures. Separately, reporting on **AI deepfake impersonation** shows similar social-engineering dynamics outside of “crypto-only” contexts, including deepfakes impersonating religious figures to solicit donations and promote fraudulent crypto-related offers, reinforcing the report’s broader finding that **AI-assisted impersonation** is increasing the reach and credibility of scams.
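
As a quick arithmetic check, the 253% figure follows directly from the two average-payment values quoted above:

```python
# Year-over-year change in average scam payment, using the figures quoted above.
avg_2024, avg_2025 = 782, 2764
print(f"{(avg_2025 - avg_2024) / avg_2024:.0%}")  # ~253%
```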

2 months ago

Surge in AI-Driven Cybercrime and Fraud Tactics

Cybercriminals are increasingly leveraging generative AI and large language models (LLMs) to enhance the sophistication, scale, and impact of their attacks. Reports highlight a dramatic rise in advanced phishing, digital fraud, and malware development, with AI enabling attackers to automate social engineering, generate convincing fake identities, and bypass traditional security controls. The use of AI has led to a significant increase in phishing email volume and a 180% surge in advanced fraud attacks, as criminals deploy autonomous bots and deepfake technologies to evade detection and inflict greater damage. Security researchers have observed malware authors integrating LLMs directly into their tools, allowing malicious code to rewrite itself or generate new commands at runtime, further complicating detection efforts. These developments mark a shift from low-effort, opportunistic attacks to highly engineered campaigns that require more resources to execute but yield far greater impact. The rapid adoption of AI by threat actors underscores the urgent need for organizations to reassess their defenses and adapt to the evolving threat landscape.

3 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.