AI-Enabled Social Engineering Scams Targeting Job Seekers and Businesses
Multiple reports highlighted a surge in AI-enabled social engineering that blends convincing pretexts with increasingly effective lures to steal credentials, money, or sensitive data. One account described a near-miss recruiter scam on LinkedIn: an attacker impersonated a recruiter associated with a well-known tech brand and tried to pull the target into a fraudulent hiring workflow. The episode illustrates how professional networking platforms are being used to seed high-trust approaches and extract personal information.
Separately, threat reporting cited a sharp rise in fake CAPTCHA lures (up 563% over 2025, per CrowdStrike’s 2026 Global Threat Report) as attackers shift away from older “malicious browser update” prompts toward CAPTCHA-themed interactions that can trick users into executing malicious steps or handing over access. ESET also warned that deepfake voice cloning has lowered the barrier for CEO/CFO impersonation, supplier fraud, and account takeover attempts. Attackers can clone a voice from short public audio samples (e.g., interviews, earnings calls, social media), then target finance or helpdesk staff, often identified via LinkedIn, to pressure wire transfers or bypass authentication and KYC checks.
AI-Enabled Social Engineering and Scams Using Deepfakes and Automation
AI is accelerating and scaling social engineering by automating reconnaissance, targeting, and victim engagement, reducing both the cost and skill required to run convincing phishing and fraud campaigns. One reported evolution is the use of **AI agents** to collect open-source intelligence and conduct live, interactive conversations with targets with minimal or no human involvement, enabling high-volume, continuously running scam operations that can adapt in real time. Deepfake-enabled impersonation is further eroding trust in voice and video communications, including calls and meetings, with examples cited of finance staff being deceived into transferring **millions** after interacting with fabricated “executives.” Recommended mitigations emphasize shifting from “does this sound like the boss?” human judgment to process-based controls (enforced verification procedures, out-of-band checks, shared authentication phrases or “safe words,” and emerging *content provenance* approaches), because traditional, predictable detection models are increasingly strained by the speed, personalization, and adaptability of AI-driven attacks.
Escalation of AI-Powered Social Engineering and Scam Attacks
A recent CrowdStrike survey highlights that 76% of organizations are struggling to keep pace with the sophistication of AI-powered attacks, with 87% considering AI-generated social engineering tactics more convincing than traditional methods. The report notes that phishing remains the leading access vector for ransomware, cited by 45% of victims, and that many organizations overestimate their preparedness, with only a quarter recovering from ransomware attacks within 24 hours. Deepfakes and AI-generated content are expected to become major attack vectors, a prospect of particular concern for healthcare organizations and C-level executives. Globally, scams are on the rise: Bitdefender and the Global Anti-Scam Alliance report that 57% of adults encountered a scam in the past year and that annual global scam losses now exceed $1 trillion. Modern scams increasingly leverage AI-generated voices and deepfake videos to impersonate trusted brands or individuals, and nearly half of all spam messages are now malicious. The persistence of poor security habits, such as password reuse, continues to leave individuals and organizations vulnerable to these evolving social engineering threats.
AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale
Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.