Skip to main content
Mallory
Mallory

Security Risks in AI Coding Assistants: Prompt Injection and Dependency Hijacking

prompt injectiondependency hijackingAIautomationpluginexploitCopilotassistantsecurity controlsriskvulnerabilitysecurity engineercodingagent behaviorMicrosoft
Updated January 6, 2026 at 04:05 PM2 sources
Security Risks in AI Coding Assistants: Prompt Injection and Dependency Hijacking

Get Ahead of Threats Like This

Know if you're exposed — before adversaries strike.

Security researchers have identified significant risks in AI-powered coding assistants, including Microsoft's Copilot and Claude Code, stemming from both prompt injection vulnerabilities and the potential for dependency hijacking via third-party plugins. In the case of Copilot, a security engineer disclosed several issues such as prompt injection leading to system prompt leaks, file upload policy bypasses using base64 encoding, and command execution within Copilot's isolated environment. Microsoft, however, has dismissed these findings as limitations of AI rather than true security vulnerabilities, sparking debate within the security community about the definition and handling of such risks.

Separately, analysis of Claude Code highlights the dangers of plugin marketplaces, where third-party 'skills' can be enabled to automate tasks like dependency management. A technical review demonstrated how a seemingly benign plugin could redirect dependency installations to attacker-controlled sources, resulting in the silent introduction of trojanized libraries into development environments. These risks are compounded by the persistent nature of enabled plugins, which can continue to influence agent behavior and potentially compromise projects over time, underscoring the need for greater scrutiny and security controls in AI development tools.

Related Entities

Related Stories

Prompt Injection Vulnerabilities in Microsoft Copilot Studio AI Agents

Security researchers demonstrated that Microsoft Copilot Studio's no-code AI agent platform is susceptible to prompt injection attacks, allowing unauthorized access to sensitive business data. By leveraging the platform's ease of use, even non-technical employees can create AI agents that integrate with critical business systems such as SharePoint, Outlook, and Teams. In controlled tests, researchers were able to extract customer credit card information and manipulate booking systems to create fraudulent transactions, such as booking a $0 vacation, by issuing carefully crafted prompts to the AI agents. The core risk arises from the democratization of AI agent creation, which, while boosting productivity, also increases the attack surface for organizations. The lack of technical safeguards and the inherent vulnerabilities of large language models (LLMs) make it easy for attackers or even well-meaning users to bypass intended security controls. Experts warn that these agentic tools, if not properly secured, can lead to significant data exposure and workflow hijacking, underscoring the urgent need for robust security practices and oversight when deploying AI-powered automation in business environments.

3 months ago

Novel Vulnerabilities and Attack Vectors in AI-Powered IDEs and Coding Assistants

A new class of vulnerabilities, termed "IDEsaster," has been discovered affecting a wide range of AI-powered Integrated Development Environments (IDEs) and coding assistants. Research revealed that over 30 security vulnerabilities, including 24 assigned CVEs, impact more than 10 leading products such as GitHub Copilot, Claude Code, and others, potentially exposing millions of users. The vulnerabilities stem from the integration of AI agents into IDEs, which were not originally designed with such capabilities in mind, leading to attack chains that can result in data exfiltration and remote code execution. Major vendors have issued advisories and updated documentation in response to these findings. Further research highlights the risks associated with the Model Context Protocol (MCP) sampling feature, commonly used in coding copilot applications. Without adequate safeguards, malicious MCP servers can exploit this feature to perform resource theft, hijack conversations, exfiltrate sensitive data, and covertly invoke tools. Proof-of-concept attacks demonstrate that the implicit trust model and lack of robust security controls in MCP can be leveraged for persistent and covert attacks, underscoring the urgent need for improved security measures in AI-driven development environments.

3 months ago

Security Risks and Controls for AI-Powered Coding Assistants and Agents

The rapid adoption of AI-powered agents and coding assistants has introduced new security challenges, particularly as these systems gain deeper access to sensitive enterprise environments and proprietary codebases. Recent research and technical reviews highlight the need for robust information flow control mechanisms to prevent unauthorized data exposure and ensure that AI agents act within defined security boundaries. As AI agents evolve from passive tools to autonomous actors capable of executing workflows, approving access, and interacting with APIs, understanding and modeling their execution and decision-making processes becomes critical for effective risk management. A focused security assessment of the Cursor AI coding assistant revealed three key vulnerabilities related to its deep integration with development workflows and privileged access to code repositories. The review emphasized the importance of ethical hacking and red teaming to uncover risks in third-party AI tools, especially those embedded in widely used platforms like Visual Studio Code. Security practitioners are encouraged to adopt formal models and reusable frameworks for auditing AI agents, ensuring that both the underlying technology and its operational context are thoroughly evaluated for potential threats.

3 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.