Mallory

Shadow AI and the Risks of Unapproved AI Tool Adoption in Enterprises

shadow AI, enterprise risks, unapproved tools, employee adoption, IT security, SaaS sprawl, automation, unsanctioned use, monitoring challenges, employee awareness, IT governance, security controls, low-code agents, chatbots, risk management
Updated November 6, 2025 at 06:05 AM · 3 sources

Organizations are facing a growing challenge as employees increasingly adopt AI tools and agents without formal IT approval, a phenomenon known as shadow AI. This unsanctioned use of AI—ranging from chatbots and large language models to low-code agents—enables employees to automate workflows and make decisions outside traditional governance structures. The lack of oversight and visibility into these autonomous systems exposes enterprises to significant risks, as sensitive data may be processed or shared through unvetted platforms, and decisions may be influenced by tools that operate beyond established security controls.

Recent research highlights that 73% of employees use AI for work, yet over a third do not consistently follow company policies, and many are unaware of existing guidelines. About 27% admit to using unapproved AI tools, often browser-based and free, making them difficult for IT to monitor. This shadow AI trend compounds the broader issue of shadow IT and SaaS sprawl, where employees bypass official channels to access tools that better meet their needs. Security teams are advised to shift from outright bans to strategies focused on discovery, communication, and oversight to manage these risks effectively.
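
The recommended shift toward discovery can start with telemetry most organizations already collect. As a minimal sketch (not drawn from the sources), the Python below counts requests to a small watchlist of known AI tool domains in a web proxy log export; the domain list, the file name, and the CSV columns (`user`, `host`) are illustrative assumptions that would need to match a real gateway's export schema.

```python
import csv
from collections import Counter

# Illustrative watchlist; a real deployment would use a maintained
# category feed from the proxy or CASB vendor.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "poe.com",
}

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per (user, AI domain) in a proxy log export.

    Assumes a CSV with 'user' and 'host' columns; adjust to match
    your gateway's actual schema.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy_export.csv").most_common(20):
        print(f"{user:<24} {host:<24} {count}")
```

The resulting inventory supports the communication-first approach the research recommends: knowing who is using what before deciding what to sanction or block.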

Sources

November 6, 2025 at 12:00 AM
November 4, 2025 at 12:00 AM
November 3, 2025 at 12:00 AM

Related Stories

Risks and Security Challenges of Shadow AI Agents in Enterprise Environments

Organizations are rapidly adopting AI-powered tools and agents across business processes, often without adequate oversight or security controls. As AI agents become more autonomous, they are increasingly granted access to sensitive systems, data, and workflows, sometimes without formal approval or visibility from IT and security teams. This phenomenon, known as 'Shadow AI,' introduces significant blind spots for traditional security tools, as these agents can operate with hidden identities and privileges.

Studies have shown that a large proportion of enterprise employees use generative AI tools like ChatGPT, frequently pasting sensitive information such as personally identifiable information (PII) and payment card data into these platforms, often through unmanaged personal accounts. This uncontrolled usage creates substantial risks of data leakage, compliance violations, and potential misuse of corporate data for AI model training. Security research highlights that 45 percent of enterprise employees use generative AI tools, with 77 percent of those users copying and pasting data into chatbots, and 22 percent of those pastes containing PII or PCI data. Furthermore, 40 percent of file uploads to generative AI sites include sensitive data, with a significant portion coming from non-corporate accounts, making it difficult for organizations to monitor or control data exfiltration.

The rise of autonomous AI agents, capable of acting independently and integrating with APIs and workflows, further complicates the security landscape, as these agents can trigger actions and access data without direct human oversight. Industry experts warn that unchecked proliferation of AI agents could lead to disastrous consequences, including unauthorized access to business processes and sensitive information. The OpenID Foundation and other organizations are calling for the development of AI-specific identity and access management standards to address these risks.

Ethical considerations are also paramount, as the design and deployment of AI agents must prioritize principles such as transparency, accountability, and alignment with human values to prevent costly errors and security incidents. Security leaders are urged to extend governance practices to cover AI agents, implement robust monitoring and access controls, and foster a culture of cybersecurity awareness to mitigate the risks posed by shadow AI. The convergence of technical, regulatory, and ethical challenges underscores the urgent need for coordinated action to secure the expanding ecosystem of AI agents within enterprises.
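
The paste statistics above are the kind of thing a DLP-style content check tries to catch at the egress point. As an illustrative sketch only (these patterns are deliberately simple and not taken from the sources), the snippet below flags text containing likely PII or payment card data before it leaves an endpoint, using a Luhn checksum to reduce false positives on card-like numbers:

```python
import re

# Illustrative patterns only; production DLP uses validated detector libraries.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: doubles every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def flag_sensitive(text: str) -> list[str]:
    """Return labels for likely PII/PCI found in outbound text."""
    findings = []
    if SSN_RE.search(text):
        findings.append("possible SSN")
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            findings.append("possible payment card")
    return findings

print(flag_sensitive("card 4111 1111 1111 1111 and SSN 123-45-6789"))
# -> ['possible SSN', 'possible payment card']
```

A check like this is most useful inline, gating pastes and uploads to unmanaged generative AI sites rather than auditing them after the fact.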

5 months ago

Security Risks from Unmanaged AI and Citizen Developer Automation in Enterprises

The rapid adoption of AI tools and no-code/low-code platforms by business users, often referred to as 'citizen developers,' is creating significant blind spots in enterprise security. Organizations are seeing a surge in applications and automations built outside traditional IT oversight, leading to vulnerabilities such as hardcoded credentials, injection attacks, and unauthorized data access. Security teams are struggling to maintain visibility and control, as the number of these shadow applications can far exceed those developed by IT professionals.

This trend is compounded by the widespread use of unapproved AI tools—so-called 'Shadow AI'—with studies showing that over 80% of employees have used such tools, and regular use is highest among executives. The lack of clear corporate AI policies and the confidence of employees in their own risk assessments further exacerbate the problem, increasing the risk of data exposure and compliance failures.

Industry reports and expert commentary highlight that as AI and automation become standard in business operations, the attack surface for cybercriminals expands. Small and medium enterprises are particularly vulnerable, with a notable percentage experiencing financial or operational losses due to cyber incidents. Security experts recommend that organizations respond by automating security oversight, updating policies, and providing actionable guidance to users. Proactive measures, such as implementing frameworks like the Essential Eight and deploying monitoring solutions, are essential to mitigate the risks associated with the democratization of development and the proliferation of unsanctioned AI tools in the workplace.
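
Hardcoded credentials, one of the vulnerability classes cited above, are also among the easiest to sweep for once automation definitions can be exported. The sketch below is an illustration under assumed file layouts and patterns, not a complete secret scanner: it walks a directory of exported no-code/low-code definitions (the directory name and file extensions are invented for the example) and flags strings that look like embedded secrets.

```python
import re
from pathlib import Path

# A few illustrative secret patterns; dedicated scanners cover far more
# formats and add entropy-based detection.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer token": re.compile(r"(?i)\bbearer\s+[a-z0-9._~+/-]{20,}"),
    "hardcoded password": re.compile(r"(?i)\bpassword\s*[:=]\s*['\"][^'\"]{6,}['\"]"),
}

def scan_exports(root: str) -> list[tuple[str, int, str]]:
    """Flag likely embedded secrets in exported automation definitions."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in {".json", ".yaml", ".yml"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

for file, lineno, label in scan_exports("./automation_exports"):
    print(f"{file}:{lineno}: {label}")
```

A nightly sweep like this is one way to act on the 'automating security oversight' recommendation: it gives security teams visibility into citizen-developer output without blocking the platforms themselves.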

4 months ago

Enterprise Risk From Unsanctioned and Over-Permissive AI Tooling

Security leaders are warning that rapid adoption of AI tools—often outside formal governance—creates expanding blind spots and increases the likelihood of **data leakage** and operational incidents. A webcast discussion framed “**Shadow AI**” as the AI-era evolution of shadow IT, highlighting that AI capabilities are frequently embedded in everyday SaaS features and browser extensions, making it difficult for organizations to accurately inventory where AI is in use and what data is being shared. The panel cited a cautionary example involving *Replit*, where insufficient controls around an AI agent reportedly contributed to a production database deletion, underscoring that agentic workflows can translate governance gaps into real outages.

Separately, reporting on *Google Vertex AI* raised concerns that **permissions and access control design** in AI platforms can amplify **insider-risk** scenarios if roles, entitlements, and auditability are not tightly managed—particularly where AI services can access or act on sensitive datasets.

Commentary-style content also broadly discusses “cognitive AI” and future-facing architectures but stops short of tying them to a specific incident or disclosure. The actionable takeaway across the relevant items is to treat AI enablement as an identity, data-governance, and monitoring problem (inventory AI usage, constrain permissions, and instrument logging) rather than a purely productivity tooling decision.
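
That closing takeaway can be made concrete with an allowlist gate around agent tool calls. Everything below is an assumption for illustration (the role table, tool names, and audit format are invented, not any vendor's API), but it shows the shape of the control: every call is checked against explicit entitlements and logged whether or not it is allowed.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent.audit")

# Illustrative role-to-tool entitlements; a real deployment would load
# these from an IAM or policy store rather than hardcode them.
ENTITLEMENTS = {
    "support-agent": {"search_kb", "draft_reply"},
    "ops-agent": {"search_kb", "restart_service"},
}

class PermissionDenied(Exception):
    pass

def invoke_tool(agent_id: str, role: str, tool: str, **kwargs):
    """Gate an agent tool call through an allowlist, logging every attempt."""
    allowed = tool in ENTITLEMENTS.get(role, set())
    audit.info(
        "%s agent=%s role=%s tool=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), agent_id, role, tool, allowed,
    )
    if not allowed:
        raise PermissionDenied(f"{role} may not call {tool}")
    return {"tool": tool, "args": kwargs, "status": "dispatched"}  # real dispatch here

invoke_tool("agent-17", "support-agent", "search_kb", query="refund policy")
try:
    invoke_tool("agent-17", "support-agent", "drop_database")
except PermissionDenied as exc:
    print("blocked:", exc)
```

A deny-by-default gate of this shape turns the Replit-style failure mode (an agent with standing access to production) into a logged, blocked attempt.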

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.