Mallory

AI-Driven Phishing Surge and Mobile Phishing Exposure

Tags: mobile phishing, phishing, ai-generated, smartphones, impersonation, hr lures, open redirects, consumer
Updated March 14, 2026 at 12:17 AM · 2 sources


Research and industry reporting indicate that phishing campaigns are increasingly using AI-generated content, with one large-scale analysis finding a sharp jump during the 2025 holiday period and sustained elevated levels into early 2026. Hoxhunt reported that AI-generated phishing rose from under 5% of observed attempts for most of 2025 to 56% during the December holiday season, before settling at a still-elevated 40% in January 2026. The same reporting also highlighted a 50-fold increase in malicious SVG attachments, along with common lures including free rewards, financial-service impersonation, fake invoices, HR-themed messages, malicious links, and the use of open redirects to obscure destination infrastructure.
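The open-redirect technique mentioned above typically hides the real phishing destination inside a query parameter on a trusted domain. As a rough illustration (not a production detector, and the domain names are hypothetical), a URL can be flagged when its query string embeds an absolute URL pointing at a different host:

```python
from urllib.parse import urlparse, parse_qs, unquote

def looks_like_open_redirect(url: str) -> bool:
    """Flag URLs whose query parameters embed an absolute URL on a
    different host -- a common open-redirect pattern used to route
    victims through a trusted domain to the real phishing site."""
    parsed = urlparse(url)
    for values in parse_qs(parsed.query).values():
        for value in values:
            inner = urlparse(unquote(value))
            if inner.scheme in ("http", "https") and inner.netloc and inner.netloc != parsed.netloc:
                return True
    return False

# Hypothetical examples:
print(looks_like_open_redirect(
    "https://trusted.example.com/redirect?url=https%3A%2F%2Fevil.example.net%2Flogin"))  # True
print(looks_like_open_redirect("https://trusted.example.com/docs?page=2"))  # False
```

A real pipeline would also resolve redirect chains and check reputation of the inner host; this sketch only captures the structural pattern.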

Separate commentary on smartphone security reinforces that phishing remains the most common consumer mobile threat, with Omdia survey data showing 27% of consumers experienced phishing scams and higher rates in English-speaking countries such as the United States and United Kingdom. Omdia also found that sophisticated phishing attacks frequently bypass on-device protections, underscoring that AI is improving attacker tradecraft faster than many user-facing defenses can adapt. A separate SC Media perspective on AI jailbreaking is not about the phishing trend itself; it focuses on broader AI application security failures and misuse of AI systems, including exposed child data and model abuse by a state-backed group.

Related Stories

AI-Driven Phishing and Social Engineering Threats in 2025-2026

Security researchers and industry experts are warning of a dramatic escalation in phishing and social engineering attacks, driven by the adoption of AI by both attackers and defenders. Reports highlight that threat actors are leveraging AI to craft highly targeted, convincing phishing emails, automate attack campaigns, and reduce the time from initial compromise to full breach to under an hour. Human Resources-themed phishing, especially termination and compensation adjustment lures, has surged in Q3 and Q4, exploiting employee trust and urgency. Security teams are urged to maintain a human-in-the-loop approach, as over-reliance on AI for detection can create blind spots, and context-driven analysis is now essential to counter increasingly sophisticated tactics. Technical research and incident analysis reveal that attackers are using a variety of new techniques, including voicemail lures, open redirects, and legitimate hosting platforms to bypass traditional email security controls. The rise of mobile device attacks, supply chain threats via malicious apps, and the use of AI prompt injection in CI/CD pipelines further expand the attack surface. Experts recommend organizations strengthen mobile security, enrich detection with threat intelligence, and ensure skilled analysts remain involved in incident response to keep pace with the evolving threat landscape.

3 months ago

AI-Enhanced Phishing Campaigns and Modern Social Engineering Tactics

Cybercriminals are increasingly leveraging artificial intelligence and advanced social engineering techniques to conduct sophisticated phishing campaigns targeting both individuals and organizations. Recent reports highlight a surge in phishing attacks that utilize AI and machine learning to craft highly personalized and convincing lures, making detection more challenging for traditional security tools. Attackers are now able to scrape social media for personal data, generate emails in a target’s native language, and automate the creation of malicious content, all with minimal effort. One notable campaign tracked since February targets social media and marketing professionals by impersonating well-known brands such as Tesla, Red Bull, and Ferrari, enticing victims to upload resumes under the guise of job opportunities. These emails employ subtle psychological tactics, such as reducing urgency to build trust, and use multi-step processes to create an illusion of legitimacy. Another observed campaign used AI to obfuscate malicious payloads within SVG files, making them harder for security filters to detect. In this case, attackers sent phishing emails from compromised small business accounts, posing as file-sharing notifications, and used self-addressed email tactics to bypass basic detection heuristics. If recipients opened the attached file, they were redirected to credential-stealing websites. Microsoft researchers noted that the complexity and structure of the malicious code suggested it was generated by a large language model, rather than written by a human. The adoption of AI by threat actors is part of a broader trend, with both defenders and attackers racing to outpace each other in the use of transformative technologies. Security experts emphasize the importance of a layered defense, recommending strong passwords, multi-factor authentication, regular software updates, and ongoing user training to identify and report suspicious content. 
The rise of AI-driven phishing has increased the frequency and sophistication of attacks, with some security centers now detecting a malicious email every 42 seconds. Organizations are urged to remain vigilant, as even basic threat actors can now execute complex attacks with the help of AI tools. The evolving threat landscape underscores the need for proactive monitoring, rapid incident response, and continuous education to mitigate the risks posed by these advanced phishing campaigns. As attackers continue to refine their methods, defenders must adapt by leveraging AI for detection and response, and by fostering a security-aware culture among users. The convergence of AI and phishing represents a significant escalation in cyber risk, demanding heightened attention from both technical and non-technical stakeholders.
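The SVG-based obfuscation described above works because SVG files can carry scripts and HTML-like content that many filters treat as benign images. A minimal heuristic scan, assuming nothing beyond well-known active-content markers (the patterns below are illustrative, not exhaustive, and the sample domain is hypothetical), might look like:

```python
import re

# Markers of active content inside an SVG attachment. Real SVG smuggling
# varies widely; these are common, easily checked signals only.
SUSPICIOUS_SVG_PATTERNS = [
    re.compile(rb"<script\b", re.I),         # embedded JavaScript
    re.compile(rb"\son\w+\s*=", re.I),       # inline event handlers (onload=...)
    re.compile(rb"<foreignObject\b", re.I),  # HTML smuggled inside the SVG
    re.compile(rb"javascript:", re.I),       # script URLs in href/xlink:href
]

def svg_is_suspicious(data: bytes) -> list[str]:
    """Return the patterns matched in an SVG payload; an empty list means
    none of these heuristics fired (not a guarantee of safety)."""
    return [p.pattern.decode() for p in SUSPICIOUS_SVG_PATTERNS if p.search(data)]

sample = b'<svg xmlns="http://www.w3.org/2000/svg" onload="fetch(\'https://evil.example/x\')"></svg>'
print(svg_is_suspicious(sample))  # the onload handler pattern matches
```

Because the article notes the malicious code appeared LLM-generated and heavily obfuscated, pattern matching alone is weak; such checks are best paired with sandbox detonation or rendering the SVG with scripting disabled.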

5 months ago
AI-Enabled Phishing at Scale and Defensive Implications

Threat actors are increasingly using **AI to industrialize phishing**, generating high volumes of near-unique emails and rapidly iterating lures, links, and attachments in ways that degrade the effectiveness of signature-based and gateway-centric controls. Cofense-reported telemetry cited in industry coverage indicates enterprises saw **one malicious email on average every 19 seconds during 2025**, with campaigns often reusing underlying infrastructure even as message content continuously mutates. Phishing sites are also becoming more adaptive, tailoring content and payload delivery based on the victim’s device and environment (e.g., different outcomes for Windows, macOS, and mobile), while collecting detailed browser and system attributes to support customization and evasion. This shift is driving executive concern and shaping security investment priorities for 2026, with broader industry reporting highlighting **AI-enabled attacks**, fraud, and phishing as top risks and positioning **AI-enabled security** as a key countermeasure to keep pace with adversaries’ automation. Separately, an opinion-focused piece argues that AI changes the “build vs. buy” calculus for security teams by enabling more internal tool development and altering what types of security products deliver value; however, it does not provide incident-specific or phishing-specific intelligence. Overall, the most actionable signal across the sources is the operational reality of AI-driven phishing volume, adaptive delivery, and evasion—reinforcing the need to prioritize resilient detection and response capabilities over static indicators alone.
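One way analysts probe the device-adaptive behavior described above is to fetch a suspect URL with different User-Agent strings and compare what comes back; a large divergence between desktop and mobile responses is one signal of cloaking. This is a hedged sketch under that assumption (legitimate sites also serve device-specific pages, so it is a triage signal, not a verdict):

```python
import difflib
import urllib.request

UA_DESKTOP = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
UA_MOBILE = "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"

def body_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; low values mean the two responses
    differ substantially, which may indicate device-based cloaking."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def fetch(url: str, user_agent: str) -> str:
    """Fetch a URL with a specific User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Usage (requires network access; do this only from an isolated analysis host):
#   sim = body_similarity(fetch(url, UA_DESKTOP), fetch(url, UA_MOBILE))
#   if sim < 0.5: flag the URL for analyst review
```

Adaptive kits may also key on IP reputation, headers, or JavaScript fingerprinting, so repeated probes from varied vantage points give a fuller picture than a single User-Agent swap.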

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.