Agentic AI Expands Identity Attack Surface and Security Risks
Rubrik Zero Labs has released research highlighting how the rapid adoption of agentic AI is fundamentally altering the landscape of identity-driven cyber threats. The report, titled Identity Crisis: Understanding & Building Resilience Against Identity-Driven Threats, reveals that 89% of organizations have already integrated AI agents into their identity infrastructure, with non-human identities (NHIs) now outnumbering human users by a staggering 82 to 1. As organizations increasingly rely on these AI agents, the identity attack surface is expanding faster than most can secure it, creating a significant gap in cyber defense capabilities.
The research warns that more than half of all cyberattacks in the coming year are expected to be driven by agentic AI, as threat actors exploit trust and valid credentials rather than bypassing traditional network defenses. The dissolution of network boundaries due to cloud migration, remote work, and AI integration has made identity the primary attack vector. Rubrik emphasizes that securing NHIs is becoming as critical as protecting human identities, and organizations must adapt their security strategies to address this emerging threat landscape.
Related Stories
Agentic AI Adoption Accelerates Security Risks and Identity Gaps
The rapid integration of agentic AI and automated tools into enterprise environments is outpacing the ability of security teams to adapt, according to recent industry reports. Attackers are leveraging both automation and early forms of agentic AI to bypass traditional defenses, forcing organizations to increase investments in AI-powered security solutions. Despite these efforts, many enterprises continue to experience significant losses, with measurable improvements in defense remaining inconsistent. Security leaders are urged to focus on the broader business impact of these threats and to accelerate the training and upskilling of their teams to effectively manage and tune AI-driven security tools.

A parallel trend is the proliferation of non-human identities (NHIs) as organizations adopt AI agents within their identity infrastructure. This expansion is creating new security gaps, with a majority of IT leaders expecting agentic AI to be responsible for a substantial portion of cyberattacks in the near future. As a result, there is a marked shift in identity and access management strategies, with many organizations changing IAM providers due to security concerns. Confidence in the ability to recover quickly from incidents is declining, highlighting the urgent need for more robust and adaptive security measures in the face of evolving AI-driven threats.
3 months ago
AI Agents and Non-Human Identities as Emerging Cybersecurity Risks
The rapid adoption of AI agents, bots, and other non-human identities (NHIs) is fundamentally reshaping the cybersecurity landscape, introducing new attack surfaces and operational challenges for enterprises. As organizations increasingly rely on automation and AI-driven processes, NHIs are being granted broad access to critical systems, often without the same oversight or security controls applied to human users. This shift has led to heightened risks such as over-permissioned accounts, static credentials, and insufficient monitoring, making NHIs attractive targets for cybercriminals seeking to exploit gaps in identity and access management (IAM).

Security leaders are urged to implement zero-trust principles, least-privilege access, automated credential rotation, and robust secrets management to mitigate these risks and prevent privileged account compromise. The complexity of managing AI agents is further compounded by the need for effective governance and the challenge of balancing control with operational simplicity in security operations centers (SOCs).

Experts emphasize that adversaries are increasingly "logging in, not breaking in," leveraging weaknesses in identity controls, especially those related to AI agents, to gain unauthorized access. The cybersecurity workforce must adapt, as AI-driven automation is expected to take over high-volume, repetitive tasks, requiring new skills in AI security and orchestration. Organizations are advised to treat every human, workload, and agent as a managed identity, enforce phishing-resistant multi-factor authentication, and continuously monitor for anomalous permission changes or session hijacking to stay ahead of evolving threats.
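To make the recommended controls concrete, the sketch below shows how an organization might audit an NHI inventory for the two failure modes the reporting highlights: stale (unrotated) credentials and over-broad permissions. This is a minimal illustrative example, not tied to any IAM product; the inventory records, field names, and thresholds are all hypothetical assumptions.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical NHI inventory; field names and values are illustrative only.
NHI_INVENTORY = [
    {"id": "ci-pipeline-bot",
     "last_rotated": datetime(2024, 1, 10, tzinfo=timezone.utc),
     "scopes": ["repo:read"]},
    {"id": "report-agent",
     "last_rotated": datetime(2025, 7, 1, tzinfo=timezone.utc),
     "scopes": ["db:read", "db:write", "admin:*"]},
]

MAX_CREDENTIAL_AGE = timedelta(days=90)   # assumed rotation policy
BROAD_SCOPES = {"admin:*", "*:*"}         # assumed "over-permissioned" markers

def audit_nhi(identity, now):
    """Return findings for one non-human identity."""
    findings = []
    # Static, long-lived credentials: flag anything past the rotation window.
    if now - identity["last_rotated"] > MAX_CREDENTIAL_AGE:
        findings.append("stale-credential")
    # Least-privilege check: flag wildcard/admin scopes.
    if BROAD_SCOPES & set(identity["scopes"]):
        findings.append("over-permissioned")
    return findings

now = datetime(2025, 9, 1, tzinfo=timezone.utc)
for nhi in NHI_INVENTORY:
    issues = audit_nhi(nhi, now)
    if issues:
        print(nhi["id"], issues)
```

In practice a sweep like this would feed an alerting pipeline or a ticketing system rather than `print`, and the scope taxonomy would come from the organization's own IAM provider.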
2 months ago
AI and Non-Human Identity Sprawl Expands IAM Attack Surface
Reporting and commentary warn that AI-driven non-human identities (NHIs) are rapidly increasing the number and turnover of credentials inside enterprise IAM programs, amplifying long-standing weaknesses such as credential sprawl, unclear ownership, and inconsistent lifecycle controls. The Cloud Security Alliance's findings highlight that many organizations treat AI identities like traditional service accounts or API keys, causing them to inherit existing governance gaps while adding new scale and speed pressures as identities are created programmatically, distributed across environments, and used continuously. CSO Online describes the operational drivers behind the surge, including microservices, Kubernetes auto-scaling, CI/CD pipelines (e.g., GitHub Actions), and infrastructure-as-code (e.g., Terraform) generating large volumes of short-lived tokens and service principals, and then argues that agentic AI further accelerates risk because these identities may be authorized to execute commands, move data, and change configurations autonomously. The net risk emphasized is that over-privileged AI agents and other NHIs can create breach conditions that may not resemble traditional intrusion, instead appearing as "normal" automated activity due to excessive permissions and weak visibility into post-authentication behavior.
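The short-lived, scope-bound tokens described above can be sketched in a few lines. The example below mints an HMAC-signed token with an explicit expiry and scope list, and verification fails closed on a bad signature, an expired token, or a missing scope. This is an assumed toy scheme for illustration; production systems would use a platform standard such as OAuth 2.0 client credentials or workload-issued JWTs rather than hand-rolled tokens.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical signing key for the sketch; never hard-code secrets in practice.
SECRET = b"demo-signing-key"

def mint_token(subject, scopes, ttl_seconds, now=None):
    """Issue a short-lived token bound to a subject and explicit scopes."""
    now = int(now if now is not None else time.time())
    payload = {"sub": subject, "scopes": scopes, "exp": now + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token, required_scope, now=None):
    """Accept only unexpired tokens with a valid signature and the needed scope."""
    now = int(now if now is not None else time.time())
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload["exp"] > now and required_scope in payload["scopes"]
```

Because every credential expires in minutes rather than months, a leaked token buys an attacker a narrow window, which is the core argument for replacing static NHI secrets with short-lived ones.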
1 month ago