Reprompt One-Click Prompt-Injection Chain Bypassing Microsoft Copilot Guardrails
Varonis Threat Labs disclosed a prompt-injection attack chain dubbed Reprompt that enabled one-click data theft from Microsoft Copilot by abusing how Copilot accepts prompts via a URL. The technique relied on the q URL parameter to auto-populate and execute attacker-supplied instructions when a victim clicked a crafted Copilot link, requiring no plugins, connectors, or additional user-entered prompts. Researchers reported the method could expose sensitive information previously available in the Copilot session, including PII, and could continue exfiltration even after the Copilot chat window was closed.
The reported attack flow chained multiple techniques to bypass Copilot’s protections, including Parameter-to-Prompt (P2P) injection via the q parameter and a double-request bypass in which safeguards applied to an initial request but could be defeated by forcing Copilot to repeat the task, leading to disclosure on a subsequent attempt. Varonis also described chain-request exfiltration to maintain covert control of the session and progressively extract data. Reporting indicates Microsoft took action in response to the research, though the core risk highlighted is that URL-triggered prompt execution and multi-step request chaining can undermine AI assistant guardrails if not consistently enforced across requests and session states.
Related Entities
Organizations
Sources
Related Stories

Reprompt One-Click Data Exfiltration Attack Against Microsoft Copilot Personal
Security researchers disclosed **Reprompt**, a single-click attack against **Microsoft Copilot Personal** that enabled stealthy exfiltration of sensitive user data by abusing a legitimate Copilot URL containing a malicious `q` parameter. After a victim clicks the link (typically delivered via phishing), the crafted URL triggers **Parameter-to-Prompt (P2P) injection**, auto-executing attacker-controlled prompts within the victim’s authenticated Copilot session; researchers reported the attacker can maintain control and continue querying data even after the Copilot window/tab is closed. Microsoft has **patched** the issue. Varonis Threat Labs described an attack chain that combines three techniques to bypass Copilot’s guardrails and evade detection: **P2P injection** (prompt injection via the `q` parameter on page load), **Double-Request** (leveraging the observation that leak protections may apply to the first request but not a repeated action), and **Chain-Request** (server-driven, sequential follow-up prompts that adapt based on prior responses). Reported exposed data includes **personally identifiable information (PII)** and other Copilot-accessible context such as conversation memory and user details (e.g., location and activity/history), with the staged prompting designed to appear benign while progressively leaking information to attacker-controlled infrastructure.
2 months ago
CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization
Researchers at **Permiso Security** disclosed a **cross-prompt injection** weakness in **Microsoft 365 Copilot** email/Teams summarization features, tracked as **CVE-2026-26133**, that could let attackers embed instruction-like text inside a normal email and influence Copilot’s generated summary. The reported impact is the ability to produce **attacker-authored, convincing phishing content** inside Copilot’s *trusted* summarization UI—without attachments, macros, or traditional exploit code—by exploiting a trust-boundary failure where the model treats untrusted email content as instructions. Microsoft confirmed the issue and rolled out mitigations and a patch across affected surfaces, crediting **Andi Ahmeti** for the discovery. In parallel, Microsoft published operational guidance on **detecting and responding to prompt abuse** in AI tools, emphasizing that prompt injection/abuse is a leading LLM application risk (aligned with **OWASP** guidance) and that detection is difficult without strong **logging and telemetry**. The guidance describes common prompt-abuse patterns (including indirect prompt injection) and provides a practical playbook for investigation and response. A separate Praetorian post provides general AI security best practices (e.g., input validation, monitoring, and human oversight) but does not add incident-specific details about CVE-2026-26133.
5 days ago
Prompt Injection Attacks Abuse AI Agent Memory and Link Previews for Manipulation and Data Exfiltration
Security researchers reported multiple **prompt-injection-driven attack paths** that exploit how AI assistants and agentic systems process untrusted content. Microsoft researchers described **AI recommendation/memory poisoning** (mapped in MITRE ATLAS as **`AML.T0080: Memory Poisoning`**) in which attackers insert instructions that cause an assistant to persistently “remember” certain companies, sites, or services as trusted or preferred, shaping future recommendations in later, unrelated conversations. Observed activity over a 60-day period included **50 distinct prompt samples** tied to **31 organizations across 14 industries**, with potential downstream impact in high-stakes domains like health, finance, and security where manipulated recommendations can mislead users without obvious signs of tampering. A separate finding highlighted how **AI agents embedded in messaging apps** can be coerced into leaking secrets via **malicious link previews**. PromptArmor demonstrated that an attacker can use chat-based prompt injection to trick an AI agent into generating an attacker-controlled URL that includes sensitive data (e.g., API keys) as parameters; when messaging platforms (e.g., Slack/Telegram) automatically fetch **link preview** metadata, the preview request can become a **zero-click exfiltration channel**—no user needs to click the link for the data-bearing request to be sent. Together, the reports underscore that agent features intended to improve usability—*persistent memory*, URL-based prompt prepopulation (e.g., “Summarize with AI” buttons), and automatic preview fetching—can be repurposed into scalable manipulation and data-loss mechanisms when untrusted prompts are processed implicitly.
1 months ago