AI-Enabled Social Engineering and Scams Using Deepfakes and Automation
AI is accelerating and scaling social engineering by automating reconnaissance, targeting, and victim engagement, reducing both the cost and skill required to run convincing phishing and fraud campaigns. One reported evolution is the use of AI agents to collect open-source intelligence and conduct live, interactive conversations with targets with minimal or no human involvement, enabling high-volume, continuously running scam operations that can adapt in real time.
Deepfake-enabled impersonation is further eroding trust in voice and video communications, including calls and meetings, with examples cited of finance staff being deceived into transferring millions after interacting with fabricated “executives.” Recommended mitigations emphasize shifting from human-sense validation to process-based controls—e.g., enforced verification procedures, out-of-band checks, shared authentication phrases (“safe words”), and emerging content provenance approaches—because traditional, predictable detection models are increasingly strained by the speed, personalization, and adaptability of AI-driven attacks.
Sources
Related Stories
AI-Driven Scams and Deepfake Threats to Identity Security
AI technologies are rapidly transforming the landscape of cybercrime, enabling scammers to create highly convincing deepfakes and personalized attacks that are increasingly difficult for individuals and organizations to detect. Recent research and industry reports highlight a surge in AI-powered scams, with over 70% of consumers encountering scams in the past year and deepfake audio and video emerging as top concerns. Attackers are leveraging social media as a primary channel to target victims, exploiting the widespread use of mobile devices, which often lack adequate security protections. The sophistication of these attacks is exemplified by incidents such as the $25 million fraud at Arup, where a deepfaked videoconference deceived an employee into transferring company funds. The growing threat of deepfakes and synthetic media is driving a cybersecurity arms race, as organizations struggle to keep pace with evolving attack techniques. Security leaders are increasingly focused on strengthening identity controls, as insurers now scrutinize the maturity and enforcement of identity and access management practices before offering coverage. Research also reveals that current identity document verification systems are hampered by limited and non-diverse training data, making them vulnerable to advanced fraud tactics. As AI continues to lower the barrier for attackers, both technical and human-centric defenses must adapt to counter the risks posed by synthetic identities and technology-enhanced social engineering.
3 months agoWidespread Use of AI and Deepfakes in Social Engineering and Cyber Attacks
A recent Gartner survey revealed that 62% of organizations have experienced deepfake attacks within the past year, highlighting the rapid adoption of AI-driven social engineering tactics. These attacks often involve the use of deepfake technology to impersonate executives, tricking employees into transferring funds or divulging sensitive information. Akif Khan of Gartner emphasized that social engineering remains a reliable attack vector, and the introduction of deepfakes makes it even more challenging for employees to detect fraudulent activity. Automated defenses alone are insufficient, as employees are now the frontline defense against these sophisticated impersonation attempts. The survey also found that 32% of organizations faced attacks targeting AI applications, particularly through prompt injection and manipulation of large language models (LLMs). Such adversarial prompting can cause AI chatbots and assistants to generate biased or malicious outputs, further expanding the threat landscape. Flashpoint analysts corroborate these findings, reporting that threat actors are actively discussing and deploying AI-powered tools in underground communities. These include specialized malicious AI models and AI-generated attack plans, which are being used to automate and scale cybercriminal operations. The most immediate threat identified is the use of AI to exploit human psychology, with attackers leveraging AI to create convincing phishing lures and fabricated realities that undermine traditional authentication methods based on voice and visual cues. Financial institutions are particularly vulnerable, as demonstrated by recent incidents where finance workers were deceived by AI-generated content. The rise of 'Dark GPTs' and Attack-as-a-Service (AaaS) offerings on the dark web further illustrates the commercialization and accessibility of AI-driven cybercrime. Security experts recommend a defense-in-depth approach, combining robust technical controls with targeted measures for emerging AI risks. AI-powered security awareness training is increasingly seen as essential, empowering employees to recognize and resist sophisticated social engineering attacks. Over 70,000 organizations are already leveraging such platforms to strengthen their human firewall. As generative AI adoption accelerates, organizations must remain vigilant against both direct deepfake attacks and indirect threats to AI application infrastructure. The evolving threat landscape demands continuous adaptation of security strategies to address the growing use of AI in cybercrime. Proactive threat intelligence and employee education are critical components in mitigating these risks. Organizations are urged to avoid isolated investments and instead implement comprehensive controls tailored to each new category of AI-driven threat. The convergence of deepfake technology, AI-powered phishing, and prompt-based attacks marks a significant escalation in the sophistication and scale of cyber threats facing enterprises today.
5 months ago
AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale
Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.
1 months ago