Mallory

Security Risks from Unmanaged AI and Citizen Developer Automation in Enterprises

Tags: Shadow AI, automation, citizen developer, security oversight, risk assessment, vulnerabilities, AI, unauthorized access, medium enterprise, application surge, monitoring solutions, low-code, small enterprise, enterprise, proactive measures
Updated November 15, 2025 at 12:04 AM · 6 sources


The rapid adoption of AI tools and no-code/low-code platforms by business users, often referred to as 'citizen developers,' is creating significant blind spots in enterprise security. Organizations are seeing a surge in applications and automations built outside traditional IT oversight, introducing vulnerabilities such as hardcoded credentials, injection flaws, and unauthorized data access. Security teams struggle to maintain visibility and control, as these shadow applications can far outnumber those developed by IT professionals. The trend is compounded by widespread use of unapproved AI tools, so-called 'Shadow AI': studies show that over 80% of employees have used such tools, with regular use highest among executives. The absence of clear corporate AI policies, combined with employees' confidence in their own risk assessments, further increases the risk of data exposure and compliance failures.
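One concrete mitigation for the hardcoded-credentials problem is scanning citizen-built app definitions and configuration exports for embedded secrets before they ship. A minimal sketch in Python follows; the regex rules are illustrative only (dedicated scanners such as gitleaks maintain far larger rule sets):

```python
import re

# Illustrative rules only; a production scanner would use a much larger,
# regularly updated catalog of secret patterns.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(
        r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def find_hardcoded_credentials(text: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for each suspected hardcoded secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in CREDENTIAL_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

config = 'db_password = "hunter2-prod-2025"\ntimeout = 30\n'
print(find_hardcoded_credentials(config))  # → [('generic_secret', 1)]
```

Running such a check automatically against every exported low-code app definition gives security teams at least baseline visibility into secrets that would otherwise never pass through code review.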

Industry reports and expert commentary highlight that as AI and automation become standard in business operations, the attack surface for cybercriminals expands. Small and medium enterprises are particularly vulnerable, with a notable percentage experiencing financial or operational losses due to cyber incidents. Security experts recommend that organizations respond by automating security oversight, updating policies, and providing actionable guidance to users. Proactive measures, such as implementing frameworks like the Essential Eight and deploying monitoring solutions, are essential to mitigate the risks associated with the democratization of development and the proliferation of unsanctioned AI tools in the workplace.

Sources

November 14, 2025 at 12:00 AM

1 more from sources like scworld

Related Stories

Shadow AI and the Risks of Unapproved AI Tool Adoption in Enterprises

Organizations are facing a growing challenge as employees increasingly adopt AI tools and agents without formal IT approval, a phenomenon known as shadow AI. This unsanctioned use of AI—ranging from chatbots and large language models to low-code agents—enables employees to automate workflows and make decisions outside traditional governance structures. The lack of oversight and visibility into these autonomous systems exposes enterprises to significant risks, as sensitive data may be processed or shared through unvetted platforms, and decisions may be influenced by tools that operate beyond established security controls. Recent research highlights that 73% of employees use AI for work, yet over a third do not consistently follow company policies, and many are unaware of existing guidelines. About 27% admit to using unapproved AI tools, often browser-based and free, making them difficult for IT to monitor. This shadow AI trend compounds the broader issue of shadow IT and SaaS sprawl, where employees bypass official channels to access tools that better meet their needs. Security teams are advised to shift from outright bans to strategies focused on discovery, communication, and oversight to manage these risks effectively.
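Discovery in practice often means mining proxy or DNS logs for traffic to known AI services. A simplified sketch follows, assuming hypothetical `"user url"` log lines and a deliberately incomplete, illustrative domain list (a real deployment would maintain a curated catalog):

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative, incomplete list; real discovery tooling tracks hundreds of services.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_usage(proxy_log_lines):
    """Count requests per user to known AI tool domains from 'user url' log lines."""
    counts = Counter()
    for line in proxy_log_lines:
        user, url = line.split(maxsplit=1)
        host = urlparse(url).hostname
        if host in KNOWN_AI_DOMAINS:
            counts[user] += 1
    return counts
```

Aggregating counts per user supports the communication-first strategy the research suggests: heavy users can be approached with guidance and sanctioned alternatives rather than blanket bans.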

4 months ago
Emerging Security Risks from AI Agents and Identity Management Failures

Organizations are facing a new wave of security challenges as internally built no-code applications and AI agents proliferate across enterprise environments. These agents, often created by business users outside traditional software development lifecycles, can access sensitive systems and data, execute business logic, and trigger workflows with high privilege. Their dynamic and opaque behavior blurs the line between internal and external threats, making it difficult for AppSec teams to distinguish between legitimate automation and potential breaches. Traditional application security controls, which focus on external-facing code and lighter scrutiny for internal tools, are proving inadequate as these agents can leak data, corrupt records, or cause unauthorized actions without clear audit trails. Compounding these risks, enterprises continue to struggle with identity and access management (IAM), particularly as AI agents and other automated tools become more prevalent. Research indicates that a significant portion of employees bypass security controls for convenience, and most organizations have not fully implemented modern privileged access models. Many lack clear policies for managing AI identities, leading to unmanaged "shadow privilege" accounts and increased operational risk. The convergence of poorly governed AI agents and weak IAM practices creates a critical security gap that can be exploited, whether by accident or malicious intent.
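A least-privilege audit for AI-agent and service identities can be sketched as a diff between roles granted and roles actually exercised; unused grants are exactly the "shadow privilege" described above. The data shapes here are hypothetical:

```python
def excess_privileges(
    granted: dict[str, set[str]], used: dict[str, set[str]]
) -> dict[str, set[str]]:
    """For each identity, return granted roles never observed in use."""
    gaps = {}
    for identity, roles in granted.items():
        unused = roles - used.get(identity, set())
        if unused:  # only report identities with a least-privilege gap
            gaps[identity] = unused
    return gaps
```

Feeding this with role grants from the IAM system and observed activity from audit logs turns the abstract IAM gap into a concrete, revocable list of permissions.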

2 months ago

AI-Driven Software Development and Security Risks in the Enterprise

Organizations are rapidly integrating AI into software development pipelines, with AI-generated code now present in every surveyed environment and a significant portion of codebases produced by AI tools. Security leaders report increased risk due to limited visibility into where and how AI is used, the proliferation of shadow AI, and the introduction of logic flaws or insecure patterns by autonomous agents. The lack of oversight and formal controls over AI-generated code and tools has expanded the attack surface, making product security and supply chain integrity top priorities for 2026. Industry experts emphasize the need for responsible adoption of AI-driven security tools, highlighting the importance of evaluation, deployment, and governance to maintain control and transparency. New frameworks, such as the AI Vulnerability Scoring System (AIVSS), are being developed to address the unique, non-deterministic risks posed by agentic and autonomous AI systems, which traditional models like CVSS cannot adequately capture. The shift to runtime application security and the management of non-human identities further underscore the evolving landscape, as organizations seek to balance innovation with robust security practices.
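Pending mature scoring frameworks like AIVSS, one pragmatic visibility control is provenance tracking: flag AI-generated files that have not received a recorded human review. The metadata shape below is hypothetical, standing in for whatever a repository's commit annotations or code-review system actually provides:

```python
def unreviewed_ai_code(file_meta: dict[str, dict]) -> list[str]:
    """Return files marked as AI-generated that lack a recorded human review."""
    return sorted(
        path
        for path, meta in file_meta.items()
        if meta.get("ai_generated") and not meta.get("reviewed_by")
    )
```

Gating merges on an empty result from a check like this gives security leaders the visibility into AI-generated code that the surveyed organizations currently lack.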

4 months ago
