Emerging Security Risks from AI Agents and Identity Management Failures

automation risks, identity management, AI agents, security gaps, external threats, internal threats, access management, security controls, operational risk, risk management, AI, data leakage, privileged accounts, automation, AppSec
Updated January 9, 2026 at 10:03 AM · 2 sources

Organizations are facing a new wave of security challenges as internally built no-code applications and AI agents proliferate across enterprise environments. These agents, often created by business users outside traditional software development lifecycles, can access sensitive systems and data, execute business logic, and trigger high-privilege workflows. Their dynamic and opaque behavior blurs the line between internal and external threats, making it difficult for AppSec teams to distinguish legitimate automation from potential breaches. Traditional application security controls, which concentrate scrutiny on external-facing code while giving internal tools a lighter review, are proving inadequate: these agents can leak data, corrupt records, or take unauthorized actions without leaving clear audit trails.

Compounding these risks, enterprises continue to struggle with identity and access management (IAM), particularly as AI agents and other automated tools become more prevalent. Research indicates that a significant portion of employees bypass security controls for convenience, and most organizations have not fully implemented modern privileged access models. Many lack clear policies for managing AI identities, leaving unmanaged "shadow privilege" accounts in place and raising operational risk. The convergence of poorly governed AI agents and weak IAM practices creates a critical security gap that can be exploited maliciously or triggered by accident.
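
To make the "shadow privilege" pattern concrete, here is a minimal Python sketch that flags privileged non-human identities with no accountable owner or stale credentials in an exported inventory. The Identity fields, account names, and 90-day threshold are illustrative assumptions, not any vendor's schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical identity record, as might be exported from an IdP or
# secrets inventory. Field names are illustrative, not a real schema.
@dataclass
class Identity:
    name: str
    kind: str                       # "human", "service", or "ai_agent"
    owner: str | None               # accountable human owner, if any
    privileged: bool                # admin or write access to sensitive systems
    last_credential_rotation: date
    scopes: list[str] = field(default_factory=list)

MAX_CREDENTIAL_AGE_DAYS = 90  # assumed rotation policy for this sketch

def shadow_privilege_findings(identities: list[Identity], today: date) -> list[str]:
    """Flag non-human identities matching the 'shadow privilege' pattern:
    privileged access combined with no named owner or stale credentials."""
    findings = []
    for ident in identities:
        if ident.kind == "human" or not ident.privileged:
            continue
        if ident.owner is None:
            findings.append(f"{ident.name}: privileged {ident.kind} with no accountable owner")
        age = (today - ident.last_credential_rotation).days
        if age > MAX_CREDENTIAL_AGE_DAYS:
            findings.append(f"{ident.name}: credential not rotated in {age} days")
    return findings

if __name__ == "__main__":
    inventory = [
        Identity("invoice-bot", "ai_agent", owner=None, privileged=True,
                 last_credential_rotation=date(2025, 6, 1), scopes=["erp:write"]),
        Identity("jdoe", "human", owner="jdoe", privileged=True,
                 last_credential_rotation=date(2025, 12, 1)),
    ]
    for finding in shadow_privilege_findings(inventory, date(2026, 1, 9)):
        print(finding)
```

In practice the inventory would come from an IdP or secrets manager export; the point of the sketch is that the "shadow privilege" check reduces to two auditable predicates, ownership and rotation age.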

Related Stories

Risks and Security Challenges of Autonomous AI Agents and Machine Identities in Enterprise Environments

The rapid adoption of artificial intelligence (AI), particularly large language models (LLMs) and autonomous agents, is fundamentally transforming enterprise operations while introducing significant new security risks. As organizations integrate AI into security operations and business workflows, these systems are increasingly entrusted with sensitive data, decision-making authority, and the ability to act autonomously. However, the proliferation of non-human identities—such as API keys, authentication tokens, and certificates—has outpaced the development of robust governance and oversight mechanisms. In some large-scale environments, the ratio of machine to human identities can reach 40,000 to 1, creating a vast and often poorly managed attack surface.

Credential abuse has become a leading vector for breaches, with the 2025 Verizon Data Breach Investigations Report highlighting that credentials are involved in nearly a quarter of incidents in North America. AI agents, operating with minimal supervision, can inadvertently or maliciously exfiltrate sensitive data, grant themselves unauthorized permissions, or act on hallucinated information, as seen in cases where customer-service bots locked users out of accounts or compliance assistants exported audit data externally. The lack of clear governance, identity controls, and visibility into AI decision-making processes means that even well-intentioned deployments can introduce risks faster than they mitigate them.

Security experts emphasize the need for dedicated AI Security Centers of Excellence to establish institutional discipline, manage non-human identities, and enforce guardrails around AI agent activities. Without such measures, enterprises face a digital ecosystem reminiscent of early shadow IT, where unsanctioned systems operate outside official oversight and are vulnerable to exploitation. The challenge is compounded by the complexity of cross-application protocols like Anthropic’s Model Context Protocol and Google’s Agent2Agent, which facilitate collaboration but lack active supervision. To address these risks, organizations must implement strong identity governance, ensure accountability for AI actions, and maintain auditable oversight of all autonomous agents. Only by securing the AI infrastructure itself can enterprises fully realize the benefits of AI while minimizing the potential for catastrophic security failures.
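
As an illustration of the guardrail pattern this summary alludes to, the sketch below routes every agent tool call through a deny-by-default policy check and writes an auditable record of each decision. The agent name, tool name, and policy format are hypothetical, chosen to mirror the "compliance assistant exported audit data externally" failure mode.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent-audit")

# Deny-by-default policy: each agent is allowed only the (tool, resource-prefix)
# pairs listed here. Agent and tool names are hypothetical examples.
POLICY = {
    "compliance-assistant": {("read_audit_data", "internal://")},
}

def policy_allows(agent: str, tool: str, resource: str) -> bool:
    allowed = POLICY.get(agent, set())
    return any(tool == t and resource.startswith(prefix) for t, prefix in allowed)

def guarded_call(agent: str, tool: str, resource: str, execute):
    """Run an agent's tool call only if policy permits, auditing either way."""
    decision = "allow" if policy_allows(agent, tool, resource) else "deny"
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent, "tool": tool, "resource": resource, "decision": decision,
    }))
    if decision == "deny":
        raise PermissionError(f"{agent} may not {tool} on {resource}")
    return execute()

# An export to an external destination is blocked and still leaves an audit record.
try:
    guarded_call("compliance-assistant", "read_audit_data",
                 "external://partner-bucket", execute=lambda: "...")
except PermissionError as err:
    print(err)
```

The design choice worth noting is that the audit entry is written before the allow/deny branch, so denied attempts are just as visible as successful actions.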

4 months ago

AI-Driven Risks and Identity Abuse in Modern Enterprise Security

Recent analyses highlight that the most significant cybersecurity losses in 2025 stemmed from identity and OAuth token abuse, rather than high-profile zero-day vulnerabilities. Attackers leveraged AI to scale social engineering, phishing, and OAuth consent abuse, leading to widespread incidents across logistics, manufacturing, and other sectors. The rapid adoption of AI in enterprise environments has expanded the attack surface, with 99% of surveyed organizations experiencing at least one attack on their AI systems in the past year. The proliferation of GenAI-assisted coding has further outpaced security teams’ ability to secure production environments, compounding risk. Security leaders are increasingly concerned about the misalignment between teams, tools, and workflows, which exacerbates the impact of these AI-driven threats. Effective management of non-human identities (NHIs), such as machine credentials and tokens, is now critical, especially in cloud and SaaS environments. The need for robust governance, visibility, and context-aware controls is underscored by the growing sophistication of attacks targeting both human and machine identities. Organizations are urged to prioritize identity and secrets management, as well as to adapt their security strategies to address the evolving risks introduced by AI and automation.
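
One concrete form such context-aware controls can take is a periodic review of OAuth grants for over-broad scopes and long-unused tokens, two common precursors to the consent abuse described above. The sketch below assumes a hypothetical grant inventory; the client names, field names, and scope strings are illustrative only.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of OAuth grants, as might be pulled from an
# identity provider's admin API. Keys and scopes are illustrative.
grants = [
    {"client": "mail-sync-addon",
     "scopes": ["mail.read", "mail.send", "files.readwrite.all"],
     "last_used": datetime(2025, 4, 2, tzinfo=timezone.utc)},
    {"client": "ci-deployer",
     "scopes": ["repo.deploy"],
     "last_used": datetime(2026, 1, 8, tzinfo=timezone.utc)},
]

HIGH_RISK_SCOPES = {"files.readwrite.all", "mail.send", "directory.readwrite.all"}
STALE_AFTER = timedelta(days=90)  # assumed revocation threshold

def review_grants(grants, now):
    """Yield grants that warrant review: broad scopes or long-unused tokens."""
    for g in grants:
        risky = HIGH_RISK_SCOPES.intersection(g["scopes"])
        if risky:
            yield g["client"], f"high-risk scopes granted: {sorted(risky)}"
        if now - g["last_used"] > STALE_AFTER:
            yield g["client"], "token unused for more than 90 days; candidate for revocation"

for client, reason in review_grants(grants, datetime(2026, 1, 9, tzinfo=timezone.utc)):
    print(f"{client}: {reason}")
```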

3 months ago

AI Agents and Non-Human Identities as Emerging Cybersecurity Risks

The rapid adoption of AI agents, bots, and other non-human identities (NHIs) is fundamentally reshaping the cybersecurity landscape, introducing new attack surfaces and operational challenges for enterprises. As organizations increasingly rely on automation and AI-driven processes, NHIs are being granted broad access to critical systems, often without the same oversight or security controls applied to human users. This shift has led to heightened risks such as over-permissioned accounts, static credentials, and insufficient monitoring, making NHIs attractive targets for cybercriminals seeking to exploit gaps in identity and access management (IAM).

Security leaders are urged to implement zero-trust principles, least-privilege access, automated credential rotation, and robust secrets management to mitigate these risks and prevent privileged account compromise. The complexity of managing AI agents is further compounded by the need for effective governance and the challenge of balancing control with operational simplicity in security operations centers (SOCs). Experts emphasize that adversaries are increasingly "logging in, not breaking in," leveraging weaknesses in identity controls—especially those related to AI agents—to gain unauthorized access.

The cybersecurity workforce must adapt, as AI-driven automation is expected to take over high-volume, repetitive tasks, requiring new skills in AI security and orchestration. Organizations are advised to treat every human, workload, and agent as a managed identity, enforce phishing-resistant multi-factor authentication, and continuously monitor for anomalous permission changes or session hijacking to stay ahead of evolving threats.
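
As a sketch of the continuous monitoring recommended above, the snippet below diffs two permission snapshots and alerts on identities that gained roles or appeared without ever being baselined. The snapshot format, identity names, and role strings are assumptions for illustration; in practice the snapshots would come from an IAM system's API.

```python
# Minimal baseline-diff monitor for permission changes. Each snapshot maps
# an identity to the set of roles it currently holds (hypothetical format).

def permission_drift(baseline: dict[str, set[str]],
                     current: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Compare two permission snapshots and report identities that gained
    roles or appeared without being baselined."""
    alerts = []
    for identity, roles in current.items():
        known = baseline.get(identity)
        if known is None:
            alerts.append((identity, f"new identity with roles {sorted(roles)}"))
            continue
        gained = roles - known
        if gained:
            alerts.append((identity, f"gained roles {sorted(gained)}"))
    return alerts

baseline = {"ticket-triage-agent": {"tickets:read"}}
current = {
    "ticket-triage-agent": {"tickets:read", "tickets:admin"},  # self-granted escalation
    "report-bot": {"warehouse:read"},                          # never baselined
}
for identity, alert in permission_drift(baseline, current):
    print(f"ALERT {identity}: {alert}")
```

A set-difference check like this catches the "agent grants itself unauthorized permissions" scenario cheaply; real deployments would also track removals and schedule the diff on every snapshot interval.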

2 months ago