Criminal Use of AI-Generated Media in Extortion and Deepfake Scams

Tags: AI-generated, deepfake, virtual kidnapping, targeted scams, emergency scams, media manipulation, manipulated images, online abuse, video manipulation, FBI warning, nonconsensual imagery, extortion, child exploitation, image scraping, AI tools
Updated January 12, 2026 at 03:17 PM · 5 sources

Criminals are leveraging AI tools to manipulate publicly available images and videos scraped from social media, creating convincing fake 'proof of life' media for use in virtual kidnapping and extortion scams. The FBI warns that these scams involve contacting victims with claims of having kidnapped a loved one, often accompanied by doctored images or videos that add credibility and pressure victims into paying a ransom. The easy availability of personal media online, combined with increasingly sophisticated AI-driven image and video manipulation, has made these scams more convincing and harder to detect; the FBI notes a rise in such emergency scams and significant financial losses for victims.
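
To make the scraping-to-scam pipeline above concrete, here is a minimal sketch, assuming Python with Pillow installed, of how an investigator might test whether a suspicious 'proof of life' image was derived from a photo scraped from a victim's public profile, using a simple average-hash comparison. The file names are hypothetical, and real forensic workflows use far more robust perceptual-matching tooling.

```python
# Minimal sketch: compare a suspicious "proof of life" image against a
# victim's public photo with a simple average hash (aHash). A small Hamming
# distance suggests the suspicious image was derived from a scraped original.
# File names below are hypothetical examples.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel
    that is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # Hypothetical inputs: a photo scraped from social media and the
    # doctored image sent by the scammers.
    original = average_hash("scraped_profile.jpg")
    suspicious = average_hash("ransom_proof_of_life.jpg")
    distance = hamming(original, suspicious)
    # For a 64-bit aHash, small distances (roughly under 10) usually
    # indicate a shared source image despite edits and recompression.
    verdict = "likely derived" if distance < 10 else "no clear match"
    print(f"Hamming distance: {distance} ({verdict})")
```

Average hashing tends to survive recompression and mild edits, which is why a doctored image often stays within a small Hamming distance of its source even after manipulation.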

The proliferation of AI-generated media has also led to broader concerns about the spread of deepfakes and nonconsensual explicit imagery. Security researchers have uncovered exposed databases from AI image generator startups containing millions of manipulated images, including nonconsensual 'nudified' photos of real people and even children. These developments highlight the growing risks posed by AI-powered media manipulation, both for targeted extortion schemes and for the privacy and safety of individuals whose images are scraped and abused online.

Related Stories

AI-Driven Deepfakes and Their Impact on Cybercrime and Digital Forensics

Artificial intelligence is increasingly being leveraged by both cybercriminals and law enforcement, fundamentally transforming the landscape of cybercrime and digital forensics. AI-powered tools are now capable of detecting cyber threats by recognizing malicious activity patterns and supporting digital forensic investigations, making it easier for specialists to identify relevant evidence such as images and chat logs while minimizing exposure to unrelated or distressing material. However, the same AI technologies are also being exploited by threat actors to create highly realistic deepfakes (synthetic images, videos, and voices) that are difficult to distinguish from genuine content. These deepfakes are used in a variety of malicious campaigns, including misinformation, fraud, identity theft, and sophisticated social engineering attacks. State-sponsored groups from countries like Iran, China, North Korea, and Russia have been documented using AI-generated media for phishing, reconnaissance, and information warfare, with specific examples including Iranian actors impersonating officials and North Korean hackers using fake job interviews to infiltrate organizations.

The rapid evolution of deepfake technology has led to the development of advanced AI-powered detection tools that utilize machine learning, computer vision, and biometric analysis to identify manipulated content before it can cause harm. Despite these advances, challenges remain: AI models can struggle with altered media, such as deepfakes, and require constant retraining with supervised, high-quality data to avoid errors and hallucinations.

Public concern over the misuse of deepfakes is growing, with surveys indicating that half of young people in the UK fear non-consensual deepfake nudes, and a significant portion of the population worries about financial losses, scams, and unauthorized access to sensitive information facilitated by AI-generated content. The emotional and psychological risks associated with malicious deepfakes are substantial, particularly when individuals or their families are targeted. There is also a notable gap in public understanding of deepfake threats, with a portion of the population unable to identify deepfake calls, underscoring the need for greater education and awareness.

Organizations are increasingly adopting AI-powered security awareness training to help employees recognize and respond to evolving social engineering tactics. The dual use of AI in both cybercrime and its detection highlights the urgent need for ongoing collaboration, improved training, and the responsible development of AI technologies to mitigate risks while enhancing digital forensics capabilities. As AI continues to advance, both the sophistication of attacks and the tools to counter them are expected to grow, making vigilance and adaptability essential for cybersecurity professionals and the public alike.

4 months ago
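
The detection tools described in the story above commonly operate frame by frame over sampled video. The sketch below, assuming Python with OpenCV (opencv-python) installed, shows only that pipeline shape: sample frames, score each, aggregate. The Laplacian-variance score is a deliberately crude stand-in for a trained model, and the video path is hypothetical.

```python
# Toy illustration of the frame-level pipeline many video deepfake
# detectors share: sample frames, score each one, aggregate the scores.
# The scoring function here is a crude sharpness proxy, not a real detector.
import cv2

def frame_scores(video_path: str, every_n: int = 30) -> list[float]:
    """Sample one frame every `every_n` frames and return a sharpness
    score per sample; real detectors replace this with a trained model."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Low Laplacian variance can indicate the over-smoothing that
            # some face-swap pipelines leave behind; a weak signal at best.
            scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
        idx += 1
    cap.release()
    return scores

if __name__ == "__main__":
    s = frame_scores("suspect_call_recording.mp4")  # hypothetical file
    if s:
        print(f"frames scored: {len(s)}, mean sharpness: {sum(s) / len(s):.1f}")
```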

AI-Driven Scams and Deepfake Threats to Identity Security

AI technologies are rapidly transforming the landscape of cybercrime, enabling scammers to create highly convincing deepfakes and personalized attacks that are increasingly difficult for individuals and organizations to detect. Recent research and industry reports highlight a surge in AI-powered scams, with over 70% of consumers encountering scams in the past year and deepfake audio and video emerging as top concerns. Attackers are leveraging social media as a primary channel to target victims, exploiting the widespread use of mobile devices, which often lack adequate security protections. The sophistication of these attacks is exemplified by incidents such as the $25 million fraud at Arup, where a deepfaked videoconference deceived an employee into transferring company funds.

The growing threat of deepfakes and synthetic media is driving a cybersecurity arms race, as organizations struggle to keep pace with evolving attack techniques. Security leaders are increasingly focused on strengthening identity controls, as insurers now scrutinize the maturity and enforcement of identity and access management practices before offering coverage. Research also reveals that current identity document verification systems are hampered by limited and non-diverse training data, making them vulnerable to advanced fraud tactics. As AI continues to lower the barrier for attackers, both technical and human-centric defenses must adapt to counter the risks posed by synthetic identities and technology-enhanced social engineering.

3 months ago
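
The training-data gaps noted in the story above can be surfaced with a simple per-group audit of verification outcomes. The following is a minimal sketch in plain Python; the record schema, field names, and group labels are illustrative assumptions, not any vendor's API.

```python
# Minimal sketch of the kind of audit the research above implies: measuring
# a document-verification system's error rates per demographic group to
# surface bias from non-diverse training data. Records are hypothetical.
from collections import defaultdict

def per_group_error_rates(records: list[dict]) -> dict[str, dict[str, float]]:
    """Each record: {'group': str, 'is_fraud': bool, 'accepted': bool}.
    Returns false-accept and false-reject rates per group."""
    counts = defaultdict(lambda: {"fraud": 0, "fa": 0, "genuine": 0, "fr": 0})
    for r in records:
        c = counts[r["group"]]
        if r["is_fraud"]:
            c["fraud"] += 1
            c["fa"] += r["accepted"]        # fraudulent document accepted
        else:
            c["genuine"] += 1
            c["fr"] += (not r["accepted"])  # legitimate user rejected
    return {
        g: {
            "false_accept_rate": c["fa"] / c["fraud"] if c["fraud"] else 0.0,
            "false_reject_rate": c["fr"] / c["genuine"] if c["genuine"] else 0.0,
        }
        for g, c in counts.items()
    }

if __name__ == "__main__":
    # Hypothetical outcomes; in practice these come from labeled test sets.
    sample = [
        {"group": "A", "is_fraud": False, "accepted": True},
        {"group": "A", "is_fraud": True,  "accepted": False},
        {"group": "B", "is_fraud": False, "accepted": False},
        {"group": "B", "is_fraud": True,  "accepted": True},
    ]
    for group, rates in per_group_error_rates(sample).items():
        print(group, rates)
```

Markedly uneven rates across groups are the signature of the limited, non-diverse training data the research describes.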

AI-Enabled Social Engineering and Scams Using Deepfakes and Automation

AI is accelerating and scaling social engineering by automating reconnaissance, targeting, and victim engagement, reducing both the cost and skill required to run convincing phishing and fraud campaigns. One reported evolution is the use of AI agents to collect open-source intelligence and conduct live, interactive conversations with targets with minimal or no human involvement, enabling high-volume, continuously running scam operations that can adapt in real time. Deepfake-enabled impersonation is further eroding trust in voice and video communications, including calls and meetings, with examples cited of finance staff being deceived into transferring millions after interacting with fabricated “executives.” Recommended mitigations emphasize shifting from human-sense validation to process-based controls, such as enforced verification procedures, out-of-band checks, shared authentication phrases (“safe words”), and emerging content provenance approaches, because traditional, predictable detection models are increasingly strained by the speed, personalization, and adaptability of AI-driven attacks.

1 month ago
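
As a rough illustration of the process-based controls recommended above, the sketch below (plain Python, standard library only) turns a pre-shared "safe word" into a challenge-response check, so the secret itself is never spoken aloud where a cloned voice could capture and replay it. The protocol details are illustrative assumptions, not an established standard.

```python
# Minimal sketch of a process-based control: a challenge-response built on
# a pre-shared "safe word", so the secret is never spoken on a call where a
# deepfaked voice could capture and replay it. Illustrative only.
import hashlib
import hmac
import secrets

# Assumed to have been exchanged out of band, e.g., in person.
SHARED_SECRET = b"agreed-offline-safe-word"

def issue_challenge() -> str:
    """Callee reads a fresh random challenge to the (alleged) executive."""
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """Caller computes the response on their own device and reads it back."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time comparison of the expected and spoken responses."""
    return hmac.compare_digest(respond(challenge, secret), response)

if __name__ == "__main__":
    ch = issue_challenge()
    print("challenge:", ch)
    resp = respond(ch)  # performed by the genuine caller, not the callee
    print("verified:", verify(ch, resp))
```

Because each challenge is random and single-use, a recording of a previous call gives an impersonator nothing to replay, which is exactly the property human-sense validation lacks.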

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.