Mallory

Escalation of AI-Powered Social Engineering and Scam Attacks

social engineering, scams, phishing, ransomware, CrowdStrike
Updated October 29, 2025 at 11:00 PM · 2 sources


A recent CrowdStrike survey highlights that 76% of organizations are struggling to keep pace with the sophistication of AI-powered attacks, and 87% consider AI-generated social engineering tactics more convincing than traditional methods. The report notes that phishing remains the leading access vector for ransomware, cited by 45% of victims, and that many organizations overestimate their preparedness: only a quarter recover from ransomware attacks within 24 hours. Deepfakes and AI-generated content are expected to become major attack vectors, a particular concern for healthcare organizations and C-level executives.

Globally, scams are on the rise: Bitdefender and the Global Anti-Scam Alliance report that 57% of adults encountered a scam in the past year and that annual global scam losses now exceed $1 trillion. Modern scams increasingly leverage AI-generated voices and deepfake videos to impersonate trusted brands or individuals, and nearly half of all spam messages are now malicious. The persistence of poor security habits, such as password reuse, continues to leave individuals and organizations vulnerable to these evolving social engineering threats.

Related Stories

Escalation of AI-Powered Cyberattacks and Social Engineering Threats

Cybersecurity experts and industry reports warn of a significant increase in cyberattacks leveraging artificial intelligence, with a particular focus on social engineering, impersonation, and ransomware. According to Google Cloud Security's Cybersecurity Forecast 2026, threat actors are expected to adopt AI to enhance the speed, scale, and sophistication of attacks, including prompt injection and the use of shadow AI agents. The Verizon Mobile Security Index and Data Breach Investigations Report highlight that human error remains the leading contributor to breaches, with 60% of confirmed incidents involving a human element. AI is also making social engineering attacks, such as smishing and executive impersonation, more effective and harder to detect, especially on mobile devices. Organizations are increasingly concerned about the risks posed by AI-powered attacks, with 34% fearing greater exposure due to AI's sophistication and 38% anticipating more dangerous ransomware. Experts recommend multi-layered defense strategies, improved AI security governance, and the adoption of AI-powered security awareness training to counteract these evolving threats. The convergence of AI-driven offensive tactics and defensive measures underscores the urgent need for CISOs to address both the opportunities and risks presented by AI in cybersecurity operations.
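The prompt-injection risk mentioned above can be illustrated with a minimal sketch. This is not a method from the cited reports: the patterns and function name below are assumptions for illustration only, and naive keyword screening like this is easily bypassed, which is why the experts quoted recommend layering it with input isolation, output filtering, and least-privilege access for AI agents.

```python
import re

# Illustrative patterns only (assumptions, not drawn from the cited reports);
# crude screening like this catches obvious injection attempts but is trivial
# to evade, so it belongs at the outermost layer of a multi-layered defense.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system prompt|instructions)",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A screen like this would typically sit in front of an LLM-backed assistant, with flagged inputs routed to logging and human review rather than silently dropped.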

4 months ago

Widespread Use of AI and Deepfakes in Social Engineering and Cyber Attacks

A recent Gartner survey revealed that 62% of organizations have experienced deepfake attacks within the past year, highlighting the rapid adoption of AI-driven social engineering tactics. These attacks often involve the use of deepfake technology to impersonate executives, tricking employees into transferring funds or divulging sensitive information. Akif Khan of Gartner emphasized that social engineering remains a reliable attack vector, and the introduction of deepfakes makes it even more challenging for employees to detect fraudulent activity. Automated defenses alone are insufficient, as employees are now the frontline defense against these sophisticated impersonation attempts. The survey also found that 32% of organizations faced attacks targeting AI applications, particularly through prompt injection and manipulation of large language models (LLMs). Such adversarial prompting can cause AI chatbots and assistants to generate biased or malicious outputs, further expanding the threat landscape. Flashpoint analysts corroborate these findings, reporting that threat actors are actively discussing and deploying AI-powered tools in underground communities. These include specialized malicious AI models and AI-generated attack plans, which are being used to automate and scale cybercriminal operations. The most immediate threat identified is the use of AI to exploit human psychology, with attackers leveraging AI to create convincing phishing lures and fabricated realities that undermine traditional authentication methods based on voice and visual cues. Financial institutions are particularly vulnerable, as demonstrated by recent incidents where finance workers were deceived by AI-generated content. The rise of 'Dark GPTs' and Attack-as-a-Service (AaaS) offerings on the dark web further illustrates the commercialization and accessibility of AI-driven cybercrime. 
Security experts recommend a defense-in-depth approach, combining robust technical controls with targeted measures for emerging AI risks. AI-powered security awareness training is increasingly seen as essential, empowering employees to recognize and resist sophisticated social engineering attacks. Over 70,000 organizations are already leveraging such platforms to strengthen their human firewall. As generative AI adoption accelerates, organizations must remain vigilant against both direct deepfake attacks and indirect threats to AI application infrastructure. The evolving threat landscape demands continuous adaptation of security strategies to address the growing use of AI in cybercrime. Proactive threat intelligence and employee education are critical components in mitigating these risks. Organizations are urged to avoid isolated investments and instead implement comprehensive controls tailored to each new category of AI-driven threat. The convergence of deepfake technology, AI-powered phishing, and prompt-based attacks marks a significant escalation in the sophistication and scale of cyber threats facing enterprises today.
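One concrete defense-in-depth control against deepfake voice and video impersonation is mandatory out-of-band verification for high-risk requests. The sketch below is an assumption-laden illustration, not a control from the cited surveys: the threshold, channel names, and field names are hypothetical. The underlying idea is that a voice or video request alone never authorizes a transfer, since both can now be convincingly faked.

```python
from dataclasses import dataclass

# Hypothetical policy sketch: channel names and the dollar threshold are
# assumptions for illustration, not values from the cited reports.
HIGH_RISK_CHANNELS = {"voice_call", "video_call", "email"}

@dataclass
class TransferRequest:
    amount: float
    channel: str             # channel the request arrived on
    callback_verified: bool  # confirmed via a number/address already on file?

def requires_callback(req: TransferRequest, threshold: float = 10_000.0) -> bool:
    """True if the request must be re-verified out of band before approval."""
    return req.amount >= threshold and req.channel in HIGH_RISK_CHANNELS

def approve(req: TransferRequest) -> bool:
    """Hold high-risk requests until verified through an independent channel."""
    if requires_callback(req) and not req.callback_verified:
        return False
    return True
```

The design point is that verification uses contact details retrieved from an internal system of record, not details supplied in the request itself, which an impersonator controls.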

5 months ago

AI-Enabled Social Engineering Scams Targeting Job Seekers and Businesses

Multiple reports highlighted a surge in **AI-enabled social engineering** that blends convincing pretexts with increasingly effective lures to steal credentials, money, or sensitive data. One account described a near-miss **LinkedIn job/recruiter scam** in which an attacker impersonated a recruiter tied to a well-known tech brand and attempted to draw the target into a fraudulent hiring/workflow process, illustrating how professional networking platforms are being used to seed high-trust approaches and extract personal information. Separately, threat reporting cited a sharp rise in **fake CAPTCHA** lures—up **563% over 2025** per *CrowdStrike’s 2026 Global Threat Report*—as attackers shift away from older “malicious browser update” prompts toward CAPTCHA-themed interactions that can trick users into executing malicious steps or handing over access. ESET also warned that **deepfake voice** has lowered the barrier for **CEO/CFO impersonation**, supplier fraud, and account takeover attempts: attackers can clone a voice from short public audio samples (e.g., interviews, earnings calls, social media) and then target finance or helpdesk staff (often identified via LinkedIn) to pressure wire transfers or bypass authentication and KYC checks.
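Fake-CAPTCHA lures of the kind CrowdStrike describes typically coach the victim into pasting a command into the Run dialog or a terminal. A minimal detection sketch follows; the regexes and function name are assumptions chosen to match a few widely reported command shapes, not patterns taken from the report, and real tooling would also examine process lineage and clipboard provenance.

```python
import re

# Illustrative heuristics only (assumed patterns, not from the cited report):
# flag pasted commands whose shape matches common fake-CAPTCHA lures, such as
# encoded PowerShell, mshta fetching a remote file, or curl piped to a shell.
SUSPICIOUS_COMMANDS = [
    r"powershell(\.exe)?\s.*(-enc|-encodedcommand|iex|downloadstring)",
    r"mshta(\.exe)?\s+https?://",
    r"curl\s+[^|]*\|\s*(ba)?sh",
    r"certutil(\.exe)?\s.*-urlcache",
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches a known lure pattern."""
    cmd = command.lower()
    return any(re.search(p, cmd) for p in SUSPICIOUS_COMMANDS)
```

In practice a check like this would run in endpoint tooling at the point of paste or execution, alerting rather than blocking, since legitimate administration can produce similar commands.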

3 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.