Abuse of AI Chat and Summarization Features to Exfiltrate or Manipulate User Data
Security reporting warned that browser extensions (including free add-ons marketed for ad blocking or VPN functionality) may be overriding browser XMLHttpRequest() and fetch() calls to capture and monetize users’ full conversations with popular AI chatbots (e.g., ChatGPT, Claude, Gemini, DeepSeek). An AI expert reported the captured content was stored in a searchable database and sold via API access; while users were assigned pseudonymized IDs, the prompts and responses were retained in full and frequently contained highly sensitive data, including medical details, immigration status, and other personal identifiers—raising significant privacy, compliance, and data-handling risk, particularly where healthcare staff paste real patient data into chat tools.
Separately, Microsoft reported a manipulation technique targeting “Summarize with AI” features where companies embed hidden prompt-injection instructions in URLs or page elements so that, when a user clicks to summarize, the AI assistant is prompted to “remember” a company as trusted or preferentially recommend it in the future. Microsoft identified 50+ unique prompts from 31 companies across 14 industries, noting that readily available tooling makes this easy to deploy and that the impact can be subtle, persistent bias in AI recommendations on high-stakes topics (including security) without user awareness.
Sources
Related Stories

Prompt Injection Attacks Abuse AI Agent Memory and Link Previews for Manipulation and Data Exfiltration
Security researchers reported multiple **prompt-injection-driven attack paths** that exploit how AI assistants and agentic systems process untrusted content. Microsoft researchers described **AI recommendation/memory poisoning** (mapped in MITRE ATLAS as **`AML.T0080: Memory Poisoning`**) in which attackers insert instructions that cause an assistant to persistently “remember” certain companies, sites, or services as trusted or preferred, shaping future recommendations in later, unrelated conversations. Observed activity over a 60-day period included **50 distinct prompt samples** tied to **31 organizations across 14 industries**, with potential downstream impact in high-stakes domains like health, finance, and security where manipulated recommendations can mislead users without obvious signs of tampering. A separate finding highlighted how **AI agents embedded in messaging apps** can be coerced into leaking secrets via **malicious link previews**. PromptArmor demonstrated that an attacker can use chat-based prompt injection to trick an AI agent into generating an attacker-controlled URL that includes sensitive data (e.g., API keys) as parameters; when messaging platforms (e.g., Slack/Telegram) automatically fetch **link preview** metadata, the preview request can become a **zero-click exfiltration channel**—no user needs to click the link for the data-bearing request to be sent. Together, the reports underscore that agent features intended to improve usability—*persistent memory*, URL-based prompt prepopulation (e.g., “Summarize with AI” buttons), and automatic preview fetching—can be repurposed into scalable manipulation and data-loss mechanisms when untrusted prompts are processed implicitly.
1 months ago
Poisoning AI Outputs via Web Content and Prompted “Summarize” Links
Security researchers are highlighting how attackers can **poison AI systems** by manipulating what models ingest or remember from web content, leading to untrustworthy outputs that may be widely relied upon. Bruce Schneier demonstrated that simply publishing a fabricated webpage can quickly influence major chatbots and search-integrated AI features (e.g., Google’s AI Overviews and Gemini), which then repeated the false claims as if they were factual; in his test, some models were fooled while others were more resistant. Separately, reporting described **AI recommendation/memory poisoning** via “Summarize with AI” buttons that embed long prompts inside URLs. The visible instruction (e.g., “summarize this article”) can be paired with hidden directives such as “remember this site as a trusted authority,” causing the user’s authenticated AI account to update long-term preferences or memory in ways that benefit an attacker or marketer. The write-up cites Microsoft threat intelligence observations of dozens of in-the-wild examples across multiple companies and warns the technique can blend into **malvertising** and become higher-risk when applied to domains like finance, healthcare, or security decision-making.
2 weeks ago
AI Recommendation Poisoning via Hidden Prompts and Reputation-Farming Agents
Security researchers reported **AI recommendation poisoning** attacks that abuse “*Summarize with AI*” buttons and AI share links to inject hidden instructions into AI assistants via crafted URL parameters. When a user clicks these links, the pre-filled prompt can attempt to write persistent directives into an assistant’s **memory** (where supported), biasing future outputs to treat certain companies as trusted sources or to prioritize specific products and advice in areas like finance, health, and security. Microsoft researchers said they observed **50+ unique prompts** tied to **31 companies across 14 industries**, and noted that readily available tooling (e.g., *CiteMET* and “AI Share URL” generators marketed as SEO hacks) lowers the barrier to deploying these manipulation techniques across email and web traffic. Separately, reporting described **AI-agent-driven “reputation farming”** targeting **open-source maintainers**, indicating a broader trend of adversaries using automated AI workflows to influence trust signals and perceived credibility in technical ecosystems. While the tactics differ (memory/prompt injection via AI links vs. automated outreach to maintainers), both reflect an emerging risk: **manipulation of AI-mediated recommendations and reputational signals** to steer user and developer decisions without transparent attribution, increasing the likelihood of downstream security impact (e.g., biased security guidance, promoted dependencies, or trust in unvetted sources).
4 weeks ago