Widespread Use of AI and Deepfakes in Social Engineering and Cyber Attacks
A recent Gartner survey revealed that 62% of organizations have experienced deepfake attacks within the past year, highlighting the rapid adoption of AI-driven social engineering tactics. These attacks often involve the use of deepfake technology to impersonate executives, tricking employees into transferring funds or divulging sensitive information. Akif Khan of Gartner emphasized that social engineering remains a reliable attack vector, and the introduction of deepfakes makes it even more challenging for employees to detect fraudulent activity. Automated defenses alone are insufficient, as employees are now the frontline defense against these sophisticated impersonation attempts. The survey also found that 32% of organizations faced attacks targeting AI applications, particularly through prompt injection and manipulation of large language models (LLMs). Such adversarial prompting can cause AI chatbots and assistants to generate biased or malicious outputs, further expanding the threat landscape. Flashpoint analysts corroborate these findings, reporting that threat actors are actively discussing and deploying AI-powered tools in underground communities. These include specialized malicious AI models and AI-generated attack plans, which are being used to automate and scale cybercriminal operations. The most immediate threat identified is the use of AI to exploit human psychology, with attackers leveraging AI to create convincing phishing lures and fabricated realities that undermine traditional authentication methods based on voice and visual cues. Financial institutions are particularly vulnerable, as demonstrated by recent incidents where finance workers were deceived by AI-generated content. The rise of 'Dark GPTs' and Attack-as-a-Service (AaaS) offerings on the dark web further illustrates the commercialization and accessibility of AI-driven cybercrime. Security experts recommend a defense-in-depth approach, combining robust technical controls with targeted measures for emerging AI risks. AI-powered security awareness training is increasingly seen as essential, empowering employees to recognize and resist sophisticated social engineering attacks. Over 70,000 organizations are already leveraging such platforms to strengthen their human firewall. As generative AI adoption accelerates, organizations must remain vigilant against both direct deepfake attacks and indirect threats to AI application infrastructure. The evolving threat landscape demands continuous adaptation of security strategies to address the growing use of AI in cybercrime. Proactive threat intelligence and employee education are critical components in mitigating these risks. Organizations are urged to avoid isolated investments and instead implement comprehensive controls tailored to each new category of AI-driven threat. The convergence of deepfake technology, AI-powered phishing, and prompt-based attacks marks a significant escalation in the sophistication and scale of cyber threats facing enterprises today.
Sources
Related Stories

AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale
Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.
1 months agoEmergence and Impact of AI-Enabled Cyberattacks and Social Engineering
Artificial intelligence is rapidly transforming the cyber threat landscape, with both financially motivated and nation-state actors leveraging AI to enhance the effectiveness and profitability of their attacks. According to Microsoft's Digital Defense Report 2025, phishing emails generated with AI are 4.5 times more likely to deceive recipients, achieving a 54% click-through rate compared to 12% for traditional phishing, and making phishing scams up to 50 times more profitable. Attackers are increasingly using AI not only to craft convincing phishing messages but also to automate multi-stage attack chains, including voice cloning and deepfake videos, which are being adopted by nation-state actors. The report highlights that AI contributed to the rise of ClickFix, which has become the most common initial access vector, accounting for 47% of attacks, surpassing phishing at 35%. Financially motivated operations now represent 52% of all known attacks, while only 4% are tied to espionage, indicating a shift in attacker priorities. Microsoft emphasizes that attackers are now 'logging in, not breaking in,' using AI-enhanced social engineering to compromise accounts through legitimate platforms. In the financial services sector, experts stress the need for robust prevention, detection, and response cycles, and recommend setting strict guardrails before deploying AI tools at scale. The distinction between AI models and AI agents is crucial, as the latter require more oversight due to their autonomous capabilities. Cloud misconfigurations remain a significant risk, underscoring the importance of security-first design in an era of AI-driven threats. The next 12–24 months are expected to see identity attacks, supply chain compromises, and AI-enabled adversaries as the dominant threats to financial institutions. Meanwhile, Chinese state-aligned threat actors have begun experimenting with AI-optimized attack chains, such as using ChatGPT and DeepSeek to generate phishing emails and enhance backdoor malware. However, early results suggest that the effectiveness of AI in the hands of less skilled actors may be limited, as demonstrated by the poor quality of phishing emails produced by the group known as DropPitch. Despite these shortcomings, the trend toward AI-driven cyberattacks is clear, and organizations are urged to adapt their defenses accordingly. The growing sophistication and accessibility of AI tools are expected to incentivize more threat actors to incorporate AI into their operations, raising the stakes for defenders across all sectors. Security leaders are advised to focus on collaboration, intelligence sharing, and continuous improvement of cyber resilience strategies to counter the evolving threat landscape. The convergence of AI with traditional attack vectors is reshaping the priorities and tactics of both attackers and defenders, making AI security a top concern for CISOs and security teams worldwide.
5 months agoAI-Driven Deepfakes and Their Impact on Cybercrime and Digital Forensics
Artificial intelligence is increasingly being leveraged by both cybercriminals and law enforcement, fundamentally transforming the landscape of cybercrime and digital forensics. AI-powered tools are now capable of detecting cyber threats by recognizing malicious activity patterns and supporting digital forensic investigations, making it easier for specialists to identify relevant evidence such as images and chat logs while minimizing exposure to unrelated or distressing material. However, the same AI technologies are also being exploited by threat actors to create highly realistic deepfakes—synthetic images, videos, and voices—that are difficult to distinguish from genuine content. These deepfakes are used in a variety of malicious campaigns, including misinformation, fraud, identity theft, and sophisticated social engineering attacks. State-sponsored groups from countries like Iran, China, North Korea, and Russia have been documented using AI-generated media for phishing, reconnaissance, and information warfare, with specific examples including Iranian actors impersonating officials and North Korean hackers using fake job interviews to infiltrate organizations. The rapid evolution of deepfake technology has led to the development of advanced AI-powered detection tools that utilize machine learning, computer vision, and biometric analysis to identify manipulated content before it can cause harm. Despite these advances, challenges remain: AI models can struggle with altered media, such as deepfakes, and require constant retraining with supervised, high-quality data to avoid errors and hallucinations. Public concern over the misuse of deepfakes is growing, with surveys indicating that half of young people in the UK fear non-consensual deepfake nudes, and a significant portion of the population worries about financial losses, scams, and unauthorized access to sensitive information facilitated by AI-generated content. The emotional and psychological risks associated with malicious deepfakes are substantial, particularly when individuals or their families are targeted. There is also a notable gap in public understanding of deepfake threats, with a portion of the population unable to identify deepfake calls, underscoring the need for greater education and awareness. Organizations are increasingly adopting AI-powered security awareness training to help employees recognize and respond to evolving social engineering tactics. The dual use of AI in both cybercrime and its detection highlights the urgent need for ongoing collaboration, improved training, and the responsible development of AI technologies to mitigate risks while enhancing digital forensics capabilities. As AI continues to advance, both the sophistication of attacks and the tools to counter them are expected to grow, making vigilance and adaptability essential for cybersecurity professionals and the public alike.
4 months ago