Skip to main content
Mallory
Mallory

Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

microsoft copilotcopilot studiocross prompt injectionaudit loggingprompt injectionphishingunified audit logauthenticationdefender visibilitysharepointagent sharingmsrcmicrosoft 365administrative actionsonedrive
Updated March 12, 2026 at 03:05 PM2 sources
Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Security researchers reported two distinct Microsoft Copilot-related risks: (1) cross prompt injection against Microsoft Copilot email summarization surfaces that can cause attacker-supplied text in an email to be treated like instructions, shaping the summary into a convincing in-product “security alert” and creating a phishing path that does not rely on attachments or macros; and (2) audit-logging gaps in Microsoft Copilot Studio where certain administrative actions for Copilot Studio agents (e.g., around sharing, authentication, logging, and publication) were not consistently recorded in Microsoft 365’s Unified Audit Log, potentially reducing defenders’ ability to detect malicious or unauthorized agent changes.

Permiso described how Copilot’s behavior varies across Outlook’s inline Summarize experience, the Outlook Copilot pane/add-in, and Teams-based summarization, with the core risk being trust transfer—users may treat Copilot output as system-generated even when it is attacker-influenced—and warned that retrieval across Microsoft 365 (Teams/OneDrive/SharePoint) could amplify impact if chained. Datadog Security Labs stated it reported Copilot Studio logging issues to MSRC, that Microsoft remediated logging for the affected events by October 5, 2025, and that Datadog later observed a regression where some events again failed to log consistently, which it also reported to Microsoft.

Related Stories

Novel Attacks Exploit Microsoft Copilot and Copilot Studio for Data Theft and OAuth Token Compromise

Security researchers have identified two distinct attack techniques targeting Microsoft's AI-powered platforms. The first, dubbed **CoPhish**, leverages Microsoft Copilot Studio agents to deliver fraudulent OAuth consent requests through legitimate Microsoft domains, enabling attackers to steal OAuth tokens. By customizing Copilot Studio chatbots and exploiting the platform's "demo website" feature, attackers can trick users into authenticating with malicious applications, potentially granting unauthorized access to sensitive resources. Microsoft has acknowledged the issue and is working on product updates to mitigate the risk, emphasizing the need for organizations to strengthen governance and consent processes. Separately, a vulnerability in Microsoft 365 Copilot was discovered that allowed attackers to use indirect prompt injection via Mermaid diagrams to exfiltrate sensitive tenant data, such as emails. By embedding malicious instructions in seemingly benign prompts, attackers could manipulate Copilot to retrieve and encode confidential information. Although Microsoft has since patched this flaw, the incident highlights the emerging risks associated with integrating AI assistants and third-party tools, as well as the challenges in securing complex, automated workflows within enterprise environments.

4 months ago
CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization

CVE-2026-26133 Cross-Prompt Injection in Microsoft 365 Copilot Email Summarization

Researchers at **Permiso Security** disclosed a **cross-prompt injection** weakness in **Microsoft 365 Copilot** email/Teams summarization features, tracked as **CVE-2026-26133**, that could let attackers embed instruction-like text inside a normal email and influence Copilot’s generated summary. The reported impact is the ability to produce **attacker-authored, convincing phishing content** inside Copilot’s *trusted* summarization UI—without attachments, macros, or traditional exploit code—by exploiting a trust-boundary failure where the model treats untrusted email content as instructions. Microsoft confirmed the issue and rolled out mitigations and a patch across affected surfaces, crediting **Andi Ahmeti** for the discovery. In parallel, Microsoft published operational guidance on **detecting and responding to prompt abuse** in AI tools, emphasizing that prompt injection/abuse is a leading LLM application risk (aligned with **OWASP** guidance) and that detection is difficult without strong **logging and telemetry**. The guidance describes common prompt-abuse patterns (including indirect prompt injection) and provides a practical playbook for investigation and response. A separate Praetorian post provides general AI security best practices (e.g., input validation, monitoring, and human oversight) but does not add incident-specific details about CVE-2026-26133.

5 days ago
Microsoft expands Microsoft 365 Copilot data controls and cross-product data access settings

Microsoft expands Microsoft 365 Copilot data controls and cross-product data access settings

Microsoft is tightening and clarifying how **Copilot** can access and process user and organizational data across the Microsoft ecosystem. Microsoft is expanding **Purview Data Loss Prevention (DLP)** enforcement so policies that block Copilot from processing restricted/sensitivity-labeled content will apply not only to files in **SharePoint** and **OneDrive**, but also to **locally stored** Word, Excel, and PowerPoint documents. The change is planned for deployment via the *Augmentation Loop (AugLoop)* Office component between late March and late April 2026, and is expected to be automatically enabled for organizations already configured to block Copilot from processing labeled content; Microsoft says the update works by allowing the Office client/AugLoop to read sensitivity labels directly rather than relying on Microsoft Graph calls tied to SharePoint/OneDrive URLs. Separately, a Copilot “Memory” setting labeled **“Microsoft usage data”** has been reported as enabling Copilot to reference data from other Microsoft products (including **Bing, MSN, and Edge**) to personalize conversations, with an option for users to disable it if they have privacy concerns. A third, unrelated Microsoft 365 issue—an acknowledged bug in **classic Outlook** that can cause the mouse pointer to disappear—does not materially relate to Copilot data access or DLP controls and appears to be a usability defect rather than a security event.

2 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.