Skip to main content
Mallory
Mallory

AI-Enabled Phishing and Malware Delivery Trends

phishing-as-a-servicemalvertisingphishingmalwareinfostealersocial engineeringgenerative aiai-assistedfake documentationgoogle search adslogin pages
Updated March 13, 2026 at 01:21 PM3 sources
AI-Enabled Phishing and Malware Delivery Trends

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Security researchers and industry commentary describe a broader rise in AI-assisted cybercrime, with attackers using generative AI to improve phishing lures, clone legitimate login pages, and scale social-engineering operations. Reporting highlights that phishing remains a leading initial access vector, while phishing-as-a-service and AI-generated content are making campaigns more convincing and easier to produce at volume. IBM similarly warns that AI is acting as a force multiplier for attackers, lowering the cost of malware development and enabling more disposable, harder-to-attribute malicious tooling.

Kaspersky documented active campaigns in which threat actors used Google Search ads and fake documentation pages to distribute the AMOS infostealer on macOS and Amatera on Windows, disguising the malware as popular AI tools including OpenClaw, Claude Code, and Doubao. By contrast, ZDNET's article focuses on the business and product-security shortcomings of Moltbook and OpenClaw acquisitions rather than a specific threat campaign, making it adjacent but not part of the same security event. The material overall is not fluff because it includes substantive threat reporting and technical security analysis, even though the references describe related developments rather than one discrete incident.

Related Entities

Related Stories

AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation

AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation

Multiple reports describe threat actors abusing *AI-adjacent* and open-source distribution channels to deliver malware or manipulate automated agents. Straiker STAR Labs reported a **SmartLoader** campaign that trojanized a legitimate-looking **Model Context Protocol (MCP)** server tied to *Oura* by cloning the project, fabricating GitHub credibility (fake forks/contributors), and getting the poisoned server listed in MCP registries; the payload ultimately deployed **StealC** to steal credentials and crypto-wallet data. Separately, researchers observed attackers using trusted platforms and SaaS reputations for delivery and monetization: a fake Android “antivirus” (*TrustBastion*) was hosted via **Hugging Face** repositories to distribute banking/credential-stealing malware, and Trend Micro documented spam/phishing that abused **Atlassian Jira Cloud** email reputation and **Keitaro TDS** redirects to funnel targets (including government/corporate users across multiple language groups) into investment scams and online casinos. In parallel, research highlights emerging risks where **AI agents and AI-enabled workflows become the target or the transport layer**. Check Point demonstrated “**AI as a proxy**,” where web-enabled assistants (e.g., *Grok*, *Microsoft Copilot*) can be coerced into acting as covert **C2 relays**, blending attacker traffic into commonly allowed enterprise destinations, and outlined a trajectory toward prompt-driven, adaptive malware behavior. OpenClaw featured in two distinct security developments: an OpenClaw advisory described a **log-poisoning / indirect prompt-injection** weakness (unsanitized WebSocket headers written to logs that may later be ingested as trusted context), while Hudson Rock reported an infostealer incident that exfiltrated sensitive **OpenClaw configuration artifacts** (e.g., `openclaw.json` tokens, `device.json` keys, and “memory/soul” files), signaling that infostealer operators are beginning to harvest AI-agent identities and automation secrets in addition to browser credentials.

4 weeks ago
AI-Enabled Threats and Security Failures Across Edge Devices, AI Agents, and Infostealer Campaigns

AI-Enabled Threats and Security Failures Across Edge Devices, AI Agents, and Infostealer Campaigns

Threat actors are increasingly operationalizing AI and automation to scale attacks and exploit weak controls across both enterprise and consumer environments. An open-source offensive platform dubbed **CyberStrikeAI**—a Go-based “AI-native security testing” framework integrating 100+ tools—was observed in infrastructure used to target **Fortinet FortiGate** edge devices at scale; researchers linked activity to an IP (212.11.64.250) exposing a `CyberStrikeAI` banner and to scanning/communications patterns consistent with mass exploitation. Separately, a newly disclosed and rapidly patched **OpenClaw** vulnerability showed how AI agent tooling can be hijacked: researchers reported that a malicious website could take over a developer’s locally running agent due to inadequate trust-boundary validation, prompting urgent upgrades to **OpenClaw v2026.2.25+**. In parallel, a “vibe-coding” hosted app on the *Lovable* platform leaked data impacting **18,000+ users** after a researcher found **16 flaws (six critical)** tied to mis-implemented backend controls (including missing/incorrect row-level security in *Supabase*), enabling unauthorized access to records and actions like bulk email and account deletion. Criminal monetization also continues to evolve beyond AI-agent risks. **AuraStealer**, a Russian-language infostealer positioned as a successor/competitor after Lumma disruptions, was advertised on multiple underground forums and is supported by a sizable C2 footprint; analysis of 200+ samples identified **48 C2 domains**, with operators abusing low-cost TLDs (e.g., `.shop`, `.cfd`) and using **Cloudflare** as a reverse proxy to mask origin infrastructure. Broader reporting and commentary reinforced that identity and access failures remain a dominant breach driver and that AI adoption is expanding the attack surface via over-privileged agents and “shadow AI,” while ransomware operators increasingly target recovery paths (including backups) and dwell to corrupt restore points. Several items in the set were non-incident thought leadership or workforce content (skills gap, jobs listings, awards, and general AI security tips) and did not add event-specific technical details beyond high-level risk framing.

2 weeks ago
AI-Enabled Offensive Techniques Accelerate Web Phishing and Vulnerability Exploitation

AI-Enabled Offensive Techniques Accelerate Web Phishing and Vulnerability Exploitation

Security researchers reported an emerging web attack technique that uses **generative AI** to turn a benign webpage into a malicious phishing or credential-stealing page *at runtime*. In a proof of concept attributed to Palo Alto Networks’ **Unit 42**, a “clean” page embeds instructions that trigger calls to public LLM APIs (e.g., Google Gemini, DeepSeek) to generate malicious JavaScript after the victim loads the site; the code is then executed in the browser, leaving little or no static payload to detect. Because the generated content is fetched from trusted AI service domains, the approach can also reduce the effectiveness of some network filtering and static analysis controls. Separately, an Anthropic evaluation highlighted that modern AI models are increasingly capable of conducting **multi-stage network attacks** using only standard, open-source tooling rather than specialized custom toolkits. The write-up notes that Claude Sonnet 4.5 could, in some simulated environments, identify a known public CVE and produce working exploit code quickly enough to exfiltrate sensitive data in an Equifax-like breach simulation, underscoring how AI can compress attacker timelines and increase the importance of fundamentals such as rapid patching and vulnerability management.

1 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.