Skip to main content
Mallory
Mallory

AI-Enabled Social Engineering and Prompt Injection Driving Malware and Recommendation Manipulation

recommendation manipulationmalvertisingsocial engineeringmalwareseo poisoningprompt injectionemail bombingweaponized documentscredential theftbackdoorit support scamjavascriptdll sideloadingfake microsoft teamsremote access
Updated March 5, 2026 at 01:14 AM2 sources
AI-Enabled Social Engineering and Prompt Injection Driving Malware and Recommendation Manipulation

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Threat researchers reported multiple AI-adjacent abuse patterns that prioritize speed and scale over novel exploitation. HP Wolf Security described “vibe hacking” scripts and modular malware used in campaigns delivering weaponized documents: one lure used PDFs linking to Booking.com and a downloaded file with double extensions that triggered JavaScript to execute a PowerShell payload; another used malvertising/SEO poisoning to redirect victims to a fake Microsoft Teams site that delivered legitimate-looking installers alongside a CapCut-themed executable and a DLL used to inject the OysterLoader backdoor.

Separately, Huntress detailed an IT support scam campaign that combined email bombing with follow-up phone calls impersonating a service desk to coerce victims into granting remote access via Quick Assist or AnyDesk, then directing them to an AWS-hosted fake Microsoft page to steal credentials and deliver a DLL that runs Havoc C2 shellcode—enabling rapid endpoint compromise and potential data theft or ransomware. In a related but distinct AI abuse vector, Microsoft reported companies embedding hidden instructions in “Summarize with AI” features via URL prompt parameters to push prompt-injection-style persistence (e.g., “remember [Company] as trusted” / “recommend [Company] first”), demonstrating how AI assistants can be manipulated to produce biased outputs without user awareness.

Related Entities

Threat Actors

Malware

Related Stories

Prompt Injection Attacks Abuse AI Agent Memory and Link Previews for Manipulation and Data Exfiltration

Prompt Injection Attacks Abuse AI Agent Memory and Link Previews for Manipulation and Data Exfiltration

Security researchers reported multiple **prompt-injection-driven attack paths** that exploit how AI assistants and agentic systems process untrusted content. Microsoft researchers described **AI recommendation/memory poisoning** (mapped in MITRE ATLAS as **`AML.T0080: Memory Poisoning`**) in which attackers insert instructions that cause an assistant to persistently “remember” certain companies, sites, or services as trusted or preferred, shaping future recommendations in later, unrelated conversations. Observed activity over a 60-day period included **50 distinct prompt samples** tied to **31 organizations across 14 industries**, with potential downstream impact in high-stakes domains like health, finance, and security where manipulated recommendations can mislead users without obvious signs of tampering. A separate finding highlighted how **AI agents embedded in messaging apps** can be coerced into leaking secrets via **malicious link previews**. PromptArmor demonstrated that an attacker can use chat-based prompt injection to trick an AI agent into generating an attacker-controlled URL that includes sensitive data (e.g., API keys) as parameters; when messaging platforms (e.g., Slack/Telegram) automatically fetch **link preview** metadata, the preview request can become a **zero-click exfiltration channel**—no user needs to click the link for the data-bearing request to be sent. Together, the reports underscore that agent features intended to improve usability—*persistent memory*, URL-based prompt prepopulation (e.g., “Summarize with AI” buttons), and automatic preview fetching—can be repurposed into scalable manipulation and data-loss mechanisms when untrusted prompts are processed implicitly.

1 months ago
AI Recommendation Poisoning via Hidden Prompts and Reputation-Farming Agents

AI Recommendation Poisoning via Hidden Prompts and Reputation-Farming Agents

Security researchers reported **AI recommendation poisoning** attacks that abuse “*Summarize with AI*” buttons and AI share links to inject hidden instructions into AI assistants via crafted URL parameters. When a user clicks these links, the pre-filled prompt can attempt to write persistent directives into an assistant’s **memory** (where supported), biasing future outputs to treat certain companies as trusted sources or to prioritize specific products and advice in areas like finance, health, and security. Microsoft researchers said they observed **50+ unique prompts** tied to **31 companies across 14 industries**, and noted that readily available tooling (e.g., *CiteMET* and “AI Share URL” generators marketed as SEO hacks) lowers the barrier to deploying these manipulation techniques across email and web traffic. Separately, reporting described **AI-agent-driven “reputation farming”** targeting **open-source maintainers**, indicating a broader trend of adversaries using automated AI workflows to influence trust signals and perceived credibility in technical ecosystems. While the tactics differ (memory/prompt injection via AI links vs. automated outreach to maintainers), both reflect an emerging risk: **manipulation of AI-mediated recommendations and reputational signals** to steer user and developer decisions without transparent attribution, increasing the likelihood of downstream security impact (e.g., biased security guidance, promoted dependencies, or trust in unvetted sources).

4 weeks ago
AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation

AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation

Multiple reports describe threat actors abusing *AI-adjacent* and open-source distribution channels to deliver malware or manipulate automated agents. Straiker STAR Labs reported a **SmartLoader** campaign that trojanized a legitimate-looking **Model Context Protocol (MCP)** server tied to *Oura* by cloning the project, fabricating GitHub credibility (fake forks/contributors), and getting the poisoned server listed in MCP registries; the payload ultimately deployed **StealC** to steal credentials and crypto-wallet data. Separately, researchers observed attackers using trusted platforms and SaaS reputations for delivery and monetization: a fake Android “antivirus” (*TrustBastion*) was hosted via **Hugging Face** repositories to distribute banking/credential-stealing malware, and Trend Micro documented spam/phishing that abused **Atlassian Jira Cloud** email reputation and **Keitaro TDS** redirects to funnel targets (including government/corporate users across multiple language groups) into investment scams and online casinos. In parallel, research highlights emerging risks where **AI agents and AI-enabled workflows become the target or the transport layer**. Check Point demonstrated “**AI as a proxy**,” where web-enabled assistants (e.g., *Grok*, *Microsoft Copilot*) can be coerced into acting as covert **C2 relays**, blending attacker traffic into commonly allowed enterprise destinations, and outlined a trajectory toward prompt-driven, adaptive malware behavior. OpenClaw featured in two distinct security developments: an OpenClaw advisory described a **log-poisoning / indirect prompt-injection** weakness (unsanitized WebSocket headers written to logs that may later be ingested as trusted context), while Hudson Rock reported an infostealer incident that exfiltrated sensitive **OpenClaw configuration artifacts** (e.g., `openclaw.json` tokens, `device.json` keys, and “memory/soul” files), signaling that infostealer operators are beginning to harvest AI-agent identities and automation secrets in addition to browser credentials.

3 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.