Skip to main content
Mallory
Mallory

CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization

cross-prompt injectionmicrosoft 365 copilotemail summarizationindirect prompt injectionprompt injectionphishingowaspmitigationsdisclosurepatch
Updated March 12, 2026 at 05:08 PM2 sources
CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Researchers at Permiso Security disclosed a cross-prompt injection weakness in Microsoft 365 Copilot email/Teams summarization features, tracked as CVE-2026-26133, that could let attackers embed instruction-like text inside a normal email and influence Copilot’s generated summary. The reported impact is the ability to produce attacker-authored, convincing phishing content inside Copilot’s trusted summarization UI—without attachments, macros, or traditional exploit code—by exploiting a trust-boundary failure where the model treats untrusted email content as instructions. Microsoft confirmed the issue and rolled out mitigations and a patch across affected surfaces, crediting Andi Ahmeti for the discovery.

In parallel, Microsoft published operational guidance on detecting and responding to prompt abuse in AI tools, emphasizing that prompt injection/abuse is a leading LLM application risk (aligned with OWASP guidance) and that detection is difficult without strong logging and telemetry. The guidance describes common prompt-abuse patterns (including indirect prompt injection) and provides a practical playbook for investigation and response. A separate Praetorian post provides general AI security best practices (e.g., input validation, monitoring, and human oversight) but does not add incident-specific details about CVE-2026-26133.

Related Stories

Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Security researchers reported two distinct Microsoft Copilot-related risks: (1) **cross prompt injection** against *Microsoft Copilot* email summarization surfaces that can cause attacker-supplied text in an email to be treated like instructions, shaping the summary into a convincing in-product “security alert” and creating a phishing path that does not rely on attachments or macros; and (2) **audit-logging gaps in Microsoft Copilot Studio** where certain administrative actions for Copilot Studio agents (e.g., around sharing, authentication, logging, and publication) were not consistently recorded in Microsoft 365’s Unified Audit Log, potentially reducing defenders’ ability to detect malicious or unauthorized agent changes. Permiso described how Copilot’s behavior varies across Outlook’s inline *Summarize* experience, the Outlook Copilot pane/add-in, and Teams-based summarization, with the core risk being **trust transfer**—users may treat Copilot output as system-generated even when it is attacker-influenced—and warned that retrieval across Microsoft 365 (Teams/OneDrive/SharePoint) could amplify impact if chained. Datadog Security Labs stated it reported Copilot Studio logging issues to **MSRC**, that Microsoft remediated logging for the affected events by **October 5, 2025**, and that Datadog later observed a **regression** where some events again failed to log consistently, which it also reported to Microsoft.

5 days ago
Reprompt One-Click Prompt-Injection Chain Bypassing Microsoft Copilot Guardrails

Reprompt One-Click Prompt-Injection Chain Bypassing Microsoft Copilot Guardrails

Varonis Threat Labs disclosed a prompt-injection attack chain dubbed **Reprompt** that enabled one-click data theft from **Microsoft Copilot** by abusing how Copilot accepts prompts via a URL. The technique relied on the `q` URL parameter to auto-populate and execute attacker-supplied instructions when a victim clicked a crafted Copilot link, requiring no plugins, connectors, or additional user-entered prompts. Researchers reported the method could expose sensitive information previously available in the Copilot session, including **PII**, and could continue exfiltration even after the Copilot chat window was closed. The reported attack flow chained multiple techniques to bypass Copilot’s protections, including **Parameter-to-Prompt (P2P) injection** via the `q` parameter and a **double-request bypass** in which safeguards applied to an initial request but could be defeated by forcing Copilot to repeat the task, leading to disclosure on a subsequent attempt. Varonis also described **chain-request exfiltration** to maintain covert control of the session and progressively extract data. Reporting indicates Microsoft took action in response to the research, though the core risk highlighted is that URL-triggered prompt execution and multi-step request chaining can undermine AI assistant guardrails if not consistently enforced across requests and session states.

1 months ago
Prompt-injection RCE risks in agentic AI tools with OS and browser automation

Prompt-injection RCE risks in agentic AI tools with OS and browser automation

Security researchers and CERT/CC reporting highlighted **critical prompt-injection-to-execution paths** in agentic AI systems where untrusted content can be interpreted as instructions and then executed via connected tools. In *ModelScope MS-Agent*, **CVE-2026-2256** (CVSS 9.8) was reported as a **command injection / RCE** issue tied to the framework’s “Shell tool,” where external input is not properly sanitized before being passed to OS command execution; a `check_safe()` denylist-based filter was described as bypassable via obfuscation/alternate syntax, enabling arbitrary command execution and potential full host compromise. Separate research from **Zenity Labs** described a broader class of **agentic AI browser** weaknesses (including Perplexity’s *Comet*) where attackers can hijack autonomous workflows using indirect prompt injection delivered through normal channels such as a **calendar invite**; prior to patches, this could drive the browser to access local files, read directories/files, and exfiltrate data, and in some cases leverage the agent’s existing authenticated context to interact with sensitive services (including password managers). A similar execution-model risk was reported in *Langflow*’s CSV Agent as **CVE-2026-27966** (CVSS 10.0), where `allow_dangerous_code=True` was hardcoded, enabling LangChain’s `python_repl_ast` tool and allowing remote attackers with chat access to coerce **server-side code execution** and full system compromise via prompt injection.

1 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.