Mallory

Risks and Security Challenges of Shadow AI Agents in Enterprise Environments

Updated October 8, 2025 at 03:00 PM · 4 sources


Organizations are rapidly adopting AI-powered tools and agents across business processes, often without adequate oversight or security controls. As AI agents become more autonomous, they are increasingly granted access to sensitive systems, data, and workflows, sometimes without formal approval or visibility from IT and security teams. This phenomenon, known as "shadow AI," introduces significant blind spots for traditional security tools, as these agents can operate with hidden identities and privileges. Studies have shown that a large proportion of enterprise employees use generative AI tools like ChatGPT, frequently pasting sensitive information such as personally identifiable information (PII) and payment card data into these platforms, often through unmanaged personal accounts. This uncontrolled usage creates substantial risks of data leakage, compliance violations, and potential misuse of corporate data for AI model training.

Security research highlights that 45 percent of enterprise employees use generative AI tools, with 77 percent of those users copying and pasting data into chatbots, and 22 percent of those pastes containing PII or PCI data. Furthermore, 40 percent of file uploads to generative AI sites include sensitive data, with a significant portion coming from non-corporate accounts, making it difficult for organizations to monitor or control data exfiltration.

The rise of autonomous AI agents, capable of acting independently and integrating with APIs and workflows, further complicates the security landscape, as these agents can trigger actions and access data without direct human oversight. Industry experts warn that the unchecked proliferation of AI agents could lead to disastrous consequences, including unauthorized access to business processes and sensitive information. The OpenID Foundation and other organizations are calling for the development of AI-specific identity and access management standards to address these risks.
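The paste-monitoring problem described above can be illustrated with a minimal detection sketch. The regex heuristics, pattern labels, and log format below are illustrative assumptions, not a real DLP product; production tooling uses far richer detectors and context analysis.

```python
import re

# Hypothetical regex heuristics for common sensitive-data patterns.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to filter out card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify_paste(text: str) -> list[str]:
    """Return labels of sensitive patterns found in a paste."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            if label == "card" and not luhn_valid(match):
                continue  # digit run that is not a plausible card number
            hits.append(label)
            break  # one hit per label is enough to flag the paste
    return hits

# Example: a paste mixing PII (email) and PCI (card number) data.
paste = "Contact jane.doe@example.com, card 4111 1111 1111 1111"
print(classify_paste(paste))  # ['email', 'card']
```

A hook like this could sit in a browser extension or proxy to flag (not necessarily block) sensitive pastes before they reach an unmanaged AI tool.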
Ethical considerations are also paramount, as the design and deployment of AI agents must prioritize principles such as transparency, accountability, and alignment with human values to prevent costly errors and security incidents. Security leaders are urged to extend governance practices to cover AI agents, implement robust monitoring and access controls, and foster a culture of cybersecurity awareness to mitigate the risks posed by shadow AI. The convergence of technical, regulatory, and ethical challenges underscores the urgent need for coordinated action to secure the expanding ecosystem of AI agents within enterprises.

Sources

Security Boulevard, October 7, 2025 at 08:18 PM
Better Angels of AI Agents, October 7, 2025 at 12:00 AM

Related Stories

Shadow AI and the Risks of Unapproved AI Tool Adoption in Enterprises

Organizations are facing a growing challenge as employees increasingly adopt AI tools and agents without formal IT approval, a phenomenon known as shadow AI. This unsanctioned use of AI—ranging from chatbots and large language models to low-code agents—enables employees to automate workflows and make decisions outside traditional governance structures. The lack of oversight and visibility into these autonomous systems exposes enterprises to significant risks, as sensitive data may be processed or shared through unvetted platforms, and decisions may be influenced by tools that operate beyond established security controls. Recent research highlights that 73% of employees use AI for work, yet over a third do not consistently follow company policies, and many are unaware of existing guidelines. About 27% admit to using unapproved AI tools, often browser-based and free, making them difficult for IT to monitor. This shadow AI trend compounds the broader issue of shadow IT and SaaS sprawl, where employees bypass official channels to access tools that better meet their needs. Security teams are advised to shift from outright bans to strategies focused on discovery, communication, and oversight to manage these risks effectively.
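The discovery-first strategy recommended above can be sketched with a simple proxy-log scan. The domain list and the "user domain" log format are illustrative assumptions; a real deployment would use a maintained URL-category feed and the organization's actual proxy schema.

```python
from collections import Counter

# Hypothetical watchlist of generative-AI domains.
GENAI_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai"}

def discover_shadow_ai(proxy_log_lines):
    """Tally requests per (user, domain) for known genAI services.

    Assumes a simplified whitespace-delimited 'user domain' log line.
    """
    usage = Counter()
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip malformed lines
        user, domain = parts[0], parts[1]
        if domain in GENAI_DOMAINS:
            usage[(user, domain)] += 1
    return usage

log = [
    "alice chatgpt.com",
    "bob claude.ai",
    "alice chatgpt.com",
    "carol intranet.corp.local",
]
print(discover_shadow_ai(log).most_common())
```

The output gives security teams a usage inventory to start the communication-and-oversight conversation, rather than a list of users to ban.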

4 months ago

Security and Risk Implications of Agentic AI and AI-Generated Code in the Enterprise

The rapid integration of agentic AI systems and AI-generated code into enterprise environments is fundamentally transforming business operations, productivity, and the cybersecurity landscape. AI agents are now embedded in daily workflows, automating tasks and augmenting human capabilities, but their lack of human intuition and ethical judgment introduces new attack surfaces and vulnerabilities. Security experts warn that the rush to deploy agentic AI—autonomous systems capable of executing complex, multistep tasks—without adequate governance or oversight is creating significant risks, including the "confused deputy" problem, where AI agents can be manipulated to misuse their privileges. The proliferation of AI-generated code further compounds these risks, as studies show a high prevalence of design flaws and security vulnerabilities in code produced by large language models, leading to increased technical debt and instability in software delivery. Organizations face mounting challenges in managing accountability and liability as AI systems act with greater autonomy. The lack of robust AI governance policies leaves enterprises exposed to breaches and regulatory risks, with a majority of organizations unprepared to manage the proliferation of "shadow AI." The surge in AI-driven web traffic is disrupting traditional business models in publishing and ecommerce, while adversaries exploit the gap between human and machine decision-making. Security leaders emphasize the need for human oversight, strong identity governance, and comprehensive risk management strategies to address the dual-front of human-AI business risk and to ensure that AI adoption does not outpace the organization’s ability to secure and govern these powerful new tools.
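The "confused deputy" problem mentioned above can be shown in a few lines: an agent holding broad privileges executes requests with its own authority unless it checks the original requester's rights. The role names and permission table are illustrative assumptions.

```python
# Illustrative permission table: the agent itself runs with admin-level
# access, but requesters have narrower rights.
PERMISSIONS = {
    "admin": {"read_payroll", "read_wiki"},
    "intern": {"read_wiki"},
}

def naive_agent(requester: str, action: str) -> str:
    # Confused deputy: acts with its own broad authority, so a
    # low-privilege requester can drive a privileged action.
    return f"executed {action}"

def hardened_agent(requester: str, action: str) -> str:
    # Deputy checks the *requester's* permissions before acting.
    if action not in PERMISSIONS.get(requester, set()):
        return f"denied {action}"
    return f"executed {action}"

print(naive_agent("intern", "read_payroll"))     # executed read_payroll
print(hardened_agent("intern", "read_payroll"))  # denied read_payroll
```

The same principle applies to AI agents: authorization must be evaluated against the identity that initiated the request, not the agent's own service credentials.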

4 months ago

Risks and Security Challenges of Autonomous AI Agents and Machine Identities in Enterprise Environments

The rapid adoption of artificial intelligence (AI), particularly large language models (LLMs) and autonomous agents, is fundamentally transforming enterprise operations while introducing significant new security risks. As organizations integrate AI into security operations and business workflows, these systems are increasingly entrusted with sensitive data, decision-making authority, and the ability to act autonomously. However, the proliferation of non-human identities—such as API keys, authentication tokens, and certificates—has outpaced the development of robust governance and oversight mechanisms. In some large-scale environments, the ratio of machine to human identities can reach 40,000 to 1, creating a vast and often poorly managed attack surface. Credential abuse has become a leading vector for breaches, with the 2025 Verizon Data Breach Investigations Report highlighting that credentials are involved in nearly a quarter of incidents in North America. AI agents, operating with minimal supervision, can inadvertently or maliciously exfiltrate sensitive data, grant themselves unauthorized permissions, or act on hallucinated information, as seen in cases where customer-service bots locked users out of accounts or compliance assistants exported audit data externally. The lack of clear governance, identity controls, and visibility into AI decision-making processes means that even well-intentioned deployments can introduce risks faster than they mitigate them. Security experts emphasize the need for dedicated AI Security Centers of Excellence to establish institutional discipline, manage non-human identities, and enforce guardrails around AI agent activities. Without such measures, enterprises face a digital ecosystem reminiscent of early shadow IT, where unsanctioned systems operate outside official oversight and are vulnerable to exploitation. 
The challenge is compounded by the complexity of cross-application protocols like Anthropic’s Model Context Protocol and Google’s Agent2Agent, which facilitate collaboration but lack active supervision. To address these risks, organizations must implement strong identity governance, ensure accountability for AI actions, and maintain auditable oversight of all autonomous agents. Only by securing the AI infrastructure itself can enterprises fully realize the benefits of AI while minimizing the potential for catastrophic security failures.
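The identity-governance and auditability controls called for above can be sketched as short-lived, narrowly scoped agent credentials plus an append-only audit trail. The field names and token shape are assumptions for illustration, not any particular product's API.

```python
import time
import uuid

AUDIT_LOG = []  # append-only record of every authorization decision

def issue_agent_token(agent_id: str, scopes: set, ttl_s: int = 300) -> dict:
    """Mint a short-lived token limited to an explicit scope set."""
    return {
        "token": uuid.uuid4().hex,
        "agent_id": agent_id,
        "scopes": scopes,
        "expires_at": time.time() + ttl_s,
    }

def authorize(token: dict, scope: str) -> bool:
    """Check scope and expiry, recording the decision for auditors."""
    ok = scope in token["scopes"] and time.time() < token["expires_at"]
    AUDIT_LOG.append({
        "agent_id": token["agent_id"],
        "scope": scope,
        "allowed": ok,
        "at": time.time(),
    })
    return ok

tok = issue_agent_token("compliance-assistant", {"audit:read"})
print(authorize(tok, "audit:read"))    # True: within the granted scope
print(authorize(tok, "audit:export"))  # False: export was never granted
```

Denying the hypothetical "audit:export" scope is exactly the guardrail that would have stopped the compliance-assistant incident described above, and the audit trail makes the attempt visible.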

4 months ago
