Skip to main content
Mallory
Mallory

AI Recommendation Poisoning via Hidden Prompts and Reputation-Farming Agents

recommendation poisoningreputation farmingai agentsindirect prompt injectionhidden promptssocial engineeringmemory poisoningsummarize with aiai assistantsprompt injectionseo manipulationai share links
Updated February 17, 2026 at 11:00 AM3 sources
AI Recommendation Poisoning via Hidden Prompts and Reputation-Farming Agents

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Security researchers reported AI recommendation poisoning attacks that abuse “Summarize with AI” buttons and AI share links to inject hidden instructions into AI assistants via crafted URL parameters. When a user clicks these links, the pre-filled prompt can attempt to write persistent directives into an assistant’s memory (where supported), biasing future outputs to treat certain companies as trusted sources or to prioritize specific products and advice in areas like finance, health, and security. Microsoft researchers said they observed 50+ unique prompts tied to 31 companies across 14 industries, and noted that readily available tooling (e.g., CiteMET and “AI Share URL” generators marketed as SEO hacks) lowers the barrier to deploying these manipulation techniques across email and web traffic.

Separately, reporting described AI-agent-driven “reputation farming” targeting open-source maintainers, indicating a broader trend of adversaries using automated AI workflows to influence trust signals and perceived credibility in technical ecosystems. While the tactics differ (memory/prompt injection via AI links vs. automated outreach to maintainers), both reflect an emerging risk: manipulation of AI-mediated recommendations and reputational signals to steer user and developer decisions without transparent attribution, increasing the likelihood of downstream security impact (e.g., biased security guidance, promoted dependencies, or trust in unvetted sources).

Related Entities

Affected Products

Related Stories

Poisoning AI Outputs via Web Content and Prompted “Summarize” Links

Poisoning AI Outputs via Web Content and Prompted “Summarize” Links

Security researchers are highlighting how attackers can **poison AI systems** by manipulating what models ingest or remember from web content, leading to untrustworthy outputs that may be widely relied upon. Bruce Schneier demonstrated that simply publishing a fabricated webpage can quickly influence major chatbots and search-integrated AI features (e.g., Google’s AI Overviews and Gemini), which then repeated the false claims as if they were factual; in his test, some models were fooled while others were more resistant. Separately, reporting described **AI recommendation/memory poisoning** via “Summarize with AI” buttons that embed long prompts inside URLs. The visible instruction (e.g., “summarize this article”) can be paired with hidden directives such as “remember this site as a trusted authority,” causing the user’s authenticated AI account to update long-term preferences or memory in ways that benefit an attacker or marketer. The write-up cites Microsoft threat intelligence observations of dozens of in-the-wild examples across multiple companies and warns the technique can blend into **malvertising** and become higher-risk when applied to domains like finance, healthcare, or security decision-making.

2 weeks ago
Prompt Injection Attacks Abuse AI Agent Memory and Link Previews for Manipulation and Data Exfiltration

Prompt Injection Attacks Abuse AI Agent Memory and Link Previews for Manipulation and Data Exfiltration

Security researchers reported multiple **prompt-injection-driven attack paths** that exploit how AI assistants and agentic systems process untrusted content. Microsoft researchers described **AI recommendation/memory poisoning** (mapped in MITRE ATLAS as **`AML.T0080: Memory Poisoning`**) in which attackers insert instructions that cause an assistant to persistently “remember” certain companies, sites, or services as trusted or preferred, shaping future recommendations in later, unrelated conversations. Observed activity over a 60-day period included **50 distinct prompt samples** tied to **31 organizations across 14 industries**, with potential downstream impact in high-stakes domains like health, finance, and security where manipulated recommendations can mislead users without obvious signs of tampering. A separate finding highlighted how **AI agents embedded in messaging apps** can be coerced into leaking secrets via **malicious link previews**. PromptArmor demonstrated that an attacker can use chat-based prompt injection to trick an AI agent into generating an attacker-controlled URL that includes sensitive data (e.g., API keys) as parameters; when messaging platforms (e.g., Slack/Telegram) automatically fetch **link preview** metadata, the preview request can become a **zero-click exfiltration channel**—no user needs to click the link for the data-bearing request to be sent. Together, the reports underscore that agent features intended to improve usability—*persistent memory*, URL-based prompt prepopulation (e.g., “Summarize with AI” buttons), and automatic preview fetching—can be repurposed into scalable manipulation and data-loss mechanisms when untrusted prompts are processed implicitly.

1 months ago
AI-Enabled Social Engineering and Prompt Injection Driving Malware and Recommendation Manipulation

AI-Enabled Social Engineering and Prompt Injection Driving Malware and Recommendation Manipulation

Threat researchers reported multiple **AI-adjacent abuse patterns** that prioritize speed and scale over novel exploitation. HP Wolf Security described “vibe hacking” scripts and modular malware used in campaigns delivering weaponized documents: one lure used PDFs linking to *Booking.com* and a downloaded file with double extensions that triggered JavaScript to execute a PowerShell payload; another used **malvertising/SEO poisoning** to redirect victims to a fake *Microsoft Teams* site that delivered legitimate-looking installers alongside a CapCut-themed executable and a DLL used to inject the **OysterLoader** backdoor. Separately, Huntress detailed an IT support scam campaign that combined **email bombing** with follow-up phone calls impersonating a service desk to coerce victims into granting remote access via *Quick Assist* or *AnyDesk*, then directing them to an AWS-hosted fake Microsoft page to steal credentials and deliver a DLL that runs **Havoc C2** shellcode—enabling rapid endpoint compromise and potential data theft or ransomware. In a related but distinct AI abuse vector, Microsoft reported companies embedding hidden instructions in “Summarize with AI” features via URL prompt parameters to push **prompt-injection-style persistence** (e.g., “remember [Company] as trusted” / “recommend [Company] first”), demonstrating how AI assistants can be manipulated to produce biased outputs without user awareness.

1 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.