Skip to main content
Mallory
Mallory

Reprompt One-Click Data Exfiltration Attack Against Microsoft Copilot Personal

data-exfiltrationCopilot-PersonalCopilotRepromptParameter-to-Promptprompt-injectionone-clickMicrosoftphishingThreat-LabsP2P-injectionVaronisstealthyDouble-Requestpatched
Updated January 15, 2026 at 08:01 PM6 sources
Reprompt One-Click Data Exfiltration Attack Against Microsoft Copilot Personal

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Security researchers disclosed Reprompt, a single-click attack against Microsoft Copilot Personal that enabled stealthy exfiltration of sensitive user data by abusing a legitimate Copilot URL containing a malicious q parameter. After a victim clicks the link (typically delivered via phishing), the crafted URL triggers Parameter-to-Prompt (P2P) injection, auto-executing attacker-controlled prompts within the victim’s authenticated Copilot session; researchers reported the attacker can maintain control and continue querying data even after the Copilot window/tab is closed. Microsoft has patched the issue.

Varonis Threat Labs described an attack chain that combines three techniques to bypass Copilot’s guardrails and evade detection: P2P injection (prompt injection via the q parameter on page load), Double-Request (leveraging the observation that leak protections may apply to the first request but not a repeated action), and Chain-Request (server-driven, sequential follow-up prompts that adapt based on prior responses). Reported exposed data includes personally identifiable information (PII) and other Copilot-accessible context such as conversation memory and user details (e.g., location and activity/history), with the staged prompting designed to appear benign while progressively leaking information to attacker-controlled infrastructure.

Related Stories

Reprompt One-Click Prompt-Injection Chain Bypassing Microsoft Copilot Guardrails

Reprompt One-Click Prompt-Injection Chain Bypassing Microsoft Copilot Guardrails

Varonis Threat Labs disclosed a prompt-injection attack chain dubbed **Reprompt** that enabled one-click data theft from **Microsoft Copilot** by abusing how Copilot accepts prompts via a URL. The technique relied on the `q` URL parameter to auto-populate and execute attacker-supplied instructions when a victim clicked a crafted Copilot link, requiring no plugins, connectors, or additional user-entered prompts. Researchers reported the method could expose sensitive information previously available in the Copilot session, including **PII**, and could continue exfiltration even after the Copilot chat window was closed. The reported attack flow chained multiple techniques to bypass Copilot’s protections, including **Parameter-to-Prompt (P2P) injection** via the `q` parameter and a **double-request bypass** in which safeguards applied to an initial request but could be defeated by forcing Copilot to repeat the task, leading to disclosure on a subsequent attempt. Varonis also described **chain-request exfiltration** to maintain covert control of the session and progressively extract data. Reporting indicates Microsoft took action in response to the research, though the core risk highlighted is that URL-triggered prompt execution and multi-step request chaining can undermine AI assistant guardrails if not consistently enforced across requests and session states.

1 months ago
Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Security researchers reported two distinct Microsoft Copilot-related risks: (1) **cross prompt injection** against *Microsoft Copilot* email summarization surfaces that can cause attacker-supplied text in an email to be treated like instructions, shaping the summary into a convincing in-product “security alert” and creating a phishing path that does not rely on attachments or macros; and (2) **audit-logging gaps in Microsoft Copilot Studio** where certain administrative actions for Copilot Studio agents (e.g., around sharing, authentication, logging, and publication) were not consistently recorded in Microsoft 365’s Unified Audit Log, potentially reducing defenders’ ability to detect malicious or unauthorized agent changes. Permiso described how Copilot’s behavior varies across Outlook’s inline *Summarize* experience, the Outlook Copilot pane/add-in, and Teams-based summarization, with the core risk being **trust transfer**—users may treat Copilot output as system-generated even when it is attacker-influenced—and warned that retrieval across Microsoft 365 (Teams/OneDrive/SharePoint) could amplify impact if chained. Datadog Security Labs stated it reported Copilot Studio logging issues to **MSRC**, that Microsoft remediated logging for the affected events by **October 5, 2025**, and that Datadog later observed a **regression** where some events again failed to log consistently, which it also reported to Microsoft.

5 days ago

Novel Attacks Exploit Microsoft Copilot and Copilot Studio for Data Theft and OAuth Token Compromise

Security researchers have identified two distinct attack techniques targeting Microsoft's AI-powered platforms. The first, dubbed **CoPhish**, leverages Microsoft Copilot Studio agents to deliver fraudulent OAuth consent requests through legitimate Microsoft domains, enabling attackers to steal OAuth tokens. By customizing Copilot Studio chatbots and exploiting the platform's "demo website" feature, attackers can trick users into authenticating with malicious applications, potentially granting unauthorized access to sensitive resources. Microsoft has acknowledged the issue and is working on product updates to mitigate the risk, emphasizing the need for organizations to strengthen governance and consent processes. Separately, a vulnerability in Microsoft 365 Copilot was discovered that allowed attackers to use indirect prompt injection via Mermaid diagrams to exfiltrate sensitive tenant data, such as emails. By embedding malicious instructions in seemingly benign prompts, attackers could manipulate Copilot to retrieve and encode confidential information. Although Microsoft has since patched this flaw, the incident highlights the emerging risks associated with integrating AI assistants and third-party tools, as well as the challenges in securing complex, automated workflows within enterprise environments.

4 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.