Mallory

Surge in AI-Driven Cybercrime and Fraud Tactics

cybercrime, digital fraud, advanced fraud, advanced phishing, identity theft, phishing volume, attack techniques, threat landscape, phishing, fraud, phishing email, social engineering, AI, malware, automation
Updated November 27, 2025 at 06:07 AM
4 sources

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Cybercriminals are increasingly leveraging generative AI and large language models (LLMs) to enhance the sophistication, scale, and impact of their attacks. Reports highlight a dramatic rise in advanced phishing, digital fraud, and malware development, with AI enabling attackers to automate social engineering, generate convincing fake identities, and bypass traditional security controls. The use of AI has led to a significant increase in phishing email volume and a 180% surge in advanced fraud attacks, as criminals deploy autonomous bots and deepfake technologies to evade detection and inflict greater damage.

Security researchers have observed malware authors integrating LLMs directly into their tools, allowing malicious code to rewrite itself or generate new commands at runtime, further complicating detection efforts. These developments mark a shift from low-effort, opportunistic attacks to highly engineered campaigns that require more resources to execute but yield far greater impact. The rapid adoption of AI by threat actors underscores the urgent need for organizations to reassess their defenses and adapt to the evolving threat landscape.
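Malware that calls out to a hosted LLM at runtime usually has to embed the API endpoint somewhere in the binary, which gives defenders a simple static hunting signal. As a minimal, hypothetical sketch (the indicator list is illustrative and not drawn from the reports above), a scanner might flag files that embed the hostnames of public LLM APIs:

```python
# Hypothetical static indicator scan: flag binaries that embed hostnames of
# public LLM APIs, a possible sign of malware that generates commands at
# runtime. The indicator list below is illustrative only.
LLM_API_INDICATORS = [
    b"api.openai.com",
    b"api.anthropic.com",
    b"generativelanguage.googleapis.com",
]

def find_llm_indicators(data: bytes) -> list[str]:
    """Return the LLM API indicator strings present in a binary blob."""
    return [ind.decode() for ind in LLM_API_INDICATORS if ind in data]

if __name__ == "__main__":
    sample = b"\x7fELF...POST https://api.openai.com/v1/chat/completions"
    print(find_llm_indicators(sample))
```

A substring match like this is trivially evaded by string obfuscation, so in practice it would be one weak signal among many (alongside behavioral detection of outbound calls to such endpoints), not a standalone control.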

Related Entities

Malware

Sources

November 27, 2025 at 12:00 AM
November 26, 2025 at 12:00 AM
November 26, 2025 at 12:00 AM

Related Stories

AI-Driven Online Fraud and Credential Theft Campaigns

Cybercriminals are increasingly leveraging advanced AI technologies, including large language models (LLMs) and agentic AI, to automate and scale online fraud, abuse, and credential theft campaigns. These AI-driven attacks enable adversaries to craft convincing phishing emails, create fake websites, and even execute deepfake voice or video calls, making it more difficult for organizations to detect and defend against malicious activity. The rise of agentic AI, which can autonomously gather inputs, evaluate options, and take actions such as infiltrating networks and stealing credentials, marks a significant escalation in attacker sophistication and persistence. Recent research highlights a 300% increase in AI-powered bot traffic, complicating the application and API threat landscape and lowering the barrier to entry for cybercriminals through fraud-as-a-service (FaaS) offerings. These developments have led to a surge in digital fraud and abuse, impacting key industries and regions globally. Organizations are advised to adopt AI-driven defenses and maintain regulatory compliance to counteract the growing threat posed by malicious AI bots and automated credential theft campaigns.

4 months ago

AI-Enhanced Phishing Campaigns and Modern Social Engineering Tactics

Cybercriminals are increasingly leveraging artificial intelligence and advanced social engineering techniques to conduct sophisticated phishing campaigns targeting both individuals and organizations. Recent reports highlight a surge in phishing attacks that utilize AI and machine learning to craft highly personalized and convincing lures, making detection more challenging for traditional security tools. Attackers are now able to scrape social media for personal data, generate emails in a target’s native language, and automate the creation of malicious content, all with minimal effort.

One notable campaign tracked since February targets social media and marketing professionals by impersonating well-known brands such as Tesla, Red Bull, and Ferrari, enticing victims to upload resumes under the guise of job opportunities. These emails employ subtle psychological tactics, such as reducing urgency to build trust, and use multi-step processes to create an illusion of legitimacy.

Another observed campaign used AI to obfuscate malicious payloads within SVG files, making them harder for security filters to detect. In this case, attackers sent phishing emails from compromised small business accounts, posing as file-sharing notifications, and used self-addressed email tactics to bypass basic detection heuristics. If recipients opened the attached file, they were redirected to credential-stealing websites. Microsoft researchers noted that the complexity and structure of the malicious code suggested it was generated by a large language model rather than written by a human.

The adoption of AI by threat actors is part of a broader trend, with both defenders and attackers racing to outpace each other in the use of transformative technologies. Security experts emphasize the importance of a layered defense, recommending strong passwords, multi-factor authentication, regular software updates, and ongoing user training to identify and report suspicious content.
The rise of AI-driven phishing has increased the frequency and sophistication of attacks, with some security centers now detecting a malicious email every 42 seconds. Organizations are urged to remain vigilant, as even basic threat actors can now execute complex attacks with the help of AI tools. The evolving threat landscape underscores the need for proactive monitoring, rapid incident response, and continuous education to mitigate the risks posed by these advanced phishing campaigns. As attackers continue to refine their methods, defenders must adapt by leveraging AI for detection and response, and by fostering a security-aware culture among users. The convergence of AI and phishing represents a significant escalation in cyber risk, demanding heightened attention from both technical and non-technical stakeholders.
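Two of the tactics described above lend themselves to simple mail-pipeline heuristics: flagging self-addressed messages, and flagging SVG attachments that carry active content. The sketch below is a minimal, hypothetical illustration (the checks are illustrative, not the detection logic used by any vendor named above):

```python
from email.message import EmailMessage

def is_self_addressed(msg: EmailMessage) -> bool:
    """Flag the self-addressed tactic: mail sent 'from' the recipient's own
    address to slip past naive sender-reputation checks."""
    sender = (msg.get("From") or "").strip().lower()
    recipient = (msg.get("To") or "").strip().lower()
    return bool(sender) and sender == recipient

def svg_has_script(svg_bytes: bytes) -> bool:
    """Flag SVG attachments embedding script or event handlers, the
    obfuscated-payload technique described above (checks are illustrative)."""
    lowered = svg_bytes.lower()
    return b"<script" in lowered or b"onload=" in lowered
```

Real filters would parse the SVG XML rather than substring-match, and would treat either signal as one input to a scoring model rather than a hard block.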

5 months ago

Emergence and Impact of AI-Enabled Cyberattacks and Social Engineering

Artificial intelligence is rapidly transforming the cyber threat landscape, with both financially motivated and nation-state actors leveraging AI to enhance the effectiveness and profitability of their attacks. According to Microsoft's Digital Defense Report 2025, phishing emails generated with AI are 4.5 times more likely to deceive recipients, achieving a 54% click-through rate compared to 12% for traditional phishing, and making phishing scams up to 50 times more profitable. Attackers are increasingly using AI not only to craft convincing phishing messages but also to automate multi-stage attack chains, including voice cloning and deepfake videos, which are being adopted by nation-state actors.

The report highlights that AI contributed to the rise of ClickFix, which has become the most common initial access vector, accounting for 47% of attacks and surpassing phishing at 35%. Financially motivated operations now represent 52% of all known attacks, while only 4% are tied to espionage, indicating a shift in attacker priorities. Microsoft emphasizes that attackers are now 'logging in, not breaking in,' using AI-enhanced social engineering to compromise accounts through legitimate platforms.

In the financial services sector, experts stress the need for robust prevention, detection, and response cycles, and recommend setting strict guardrails before deploying AI tools at scale. The distinction between AI models and AI agents is crucial, as the latter require more oversight due to their autonomous capabilities. Cloud misconfigurations remain a significant risk, underscoring the importance of security-first design in an era of AI-driven threats. The next 12–24 months are expected to see identity attacks, supply chain compromises, and AI-enabled adversaries as the dominant threats to financial institutions.
Meanwhile, Chinese state-aligned threat actors have begun experimenting with AI-optimized attack chains, such as using ChatGPT and DeepSeek to generate phishing emails and enhance backdoor malware. However, early results suggest that the effectiveness of AI in the hands of less skilled actors may be limited, as demonstrated by the poor quality of phishing emails produced by the group known as DropPitch. Despite these shortcomings, the trend toward AI-driven cyberattacks is clear, and organizations are urged to adapt their defenses accordingly. The growing sophistication and accessibility of AI tools are expected to incentivize more threat actors to incorporate AI into their operations, raising the stakes for defenders across all sectors. Security leaders are advised to focus on collaboration, intelligence sharing, and continuous improvement of cyber resilience strategies to counter the evolving threat landscape. The convergence of AI with traditional attack vectors is reshaping the priorities and tactics of both attackers and defenders, making AI security a top concern for CISOs and security teams worldwide.

5 months ago
