Agentic AI Adoption and Emerging Security Risks in AI Agents
Enterprises and public-sector organizations are accelerating adoption of AI agents and generative AI to automate knowledge work and software delivery, with guidance increasingly framed as a management and governance problem rather than a purely technical one. Commentary on agentic AI in software development describes agents as autonomous decision loops operating within guardrails (goal decomposition, tool selection, execution, observation, and iteration), enabled by mature CI/CD automation and API-driven infrastructure. Separate reporting highlights empirical findings that AI-generated code has grown to nearly 30% of code by late 2024 and is associated with an estimated ~4% productivity lift, with gains concentrated among more experienced developers despite higher usage among less-experienced staff.
Security and procurement implications are emerging alongside this adoption. Research on agentic tool chain attacks warns that AI agents’ “reasoning layer” and natural-language tool metadata become an attack surface, enabling techniques such as tool poisoning, tool shadowing, and “rugpull” behavior that can lead to covert data leakage or unauthorized actions; the risk is amplified when tools are centralized via architectures like the Model Context Protocol (MCP), where compromise of a shared tool server can propagate malicious behavior across many agents. In the US federal context, agencies are signaling demand for AI tools that deliver operational value while meeting requirements for security, transparency, and responsible use, and the General Services Administration is also tightening contractor cybersecurity expectations for work involving CUI by requiring alignment with NIST SP 800-171 (and select 800-172 controls), including MFA, encryption, vulnerability remediation, and removal of end-of-life components, with independent assessments as part of authorization and ongoing monitoring.
Sources
Related Stories
Security and Risk Implications of Agentic AI and AI-Generated Code in the Enterprise
The rapid integration of agentic AI systems and AI-generated code into enterprise environments is fundamentally transforming business operations, productivity, and the cybersecurity landscape. AI agents are now embedded in daily workflows, automating tasks and augmenting human capabilities, but their lack of human intuition and ethical judgment introduces new attack surfaces and vulnerabilities. Security experts warn that the rush to deploy agentic AI—autonomous systems capable of executing complex, multistep tasks—without adequate governance or oversight is creating significant risks, including the "confused deputy" problem, where AI agents can be manipulated to misuse their privileges. The proliferation of AI-generated code further compounds these risks, as studies show a high prevalence of design flaws and security vulnerabilities in code produced by large language models, leading to increased technical debt and instability in software delivery. Organizations face mounting challenges in managing accountability and liability as AI systems act with greater autonomy. The lack of robust AI governance policies leaves enterprises exposed to breaches and regulatory risks, with a majority of organizations unprepared to manage the proliferation of "shadow AI." The surge in AI-driven web traffic is disrupting traditional business models in publishing and ecommerce, while adversaries exploit the gap between human and machine decision-making. Security leaders emphasize the need for human oversight, strong identity governance, and comprehensive risk management strategies to address the dual-front of human-AI business risk and to ensure that AI adoption does not outpace the organization’s ability to secure and govern these powerful new tools.
4 months ago
Enterprise Security Risks From Agentic and Generative AI Deployments
Enterprises are rapidly integrating **agentic AI** assistants with high-privilege connections to ticketing systems, source code repositories, chat platforms, and cloud dashboards, enabling actions such as opening pull requests, querying internal databases, and triggering automated workflows with limited human oversight. Reporting citing Cisco’s *State of AI Security 2026* indicates many organizations are moving forward with these deployments despite low security readiness, expanding exposure across model interfaces, tool integrations, and the broader supply chain. Multiple sources highlight that attacker techniques against AI systems are maturing, particularly **prompt injection/jailbreaks** and multi-turn attacks that exploit session state, memory, and tool-calling to drive unsafe actions or data leakage. Separately, adversaries are using generative AI for **deepfake-enabled social engineering** (including video/voice impersonation to bypass identity verification and authorize sensitive actions) and for scalable brand impersonation via malicious ad campaigns; one widely cited example involved Arup, where a deepfake video call led to authorization of a fraudulent HK$200 million transfer. Overall, the material is primarily risk and threat reporting (not a single incident), emphasizing that AI systems’ contextual behavior and privileged integrations create new control gaps that traditional security testing and defenses may not detect.
3 weeks ago
Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery
Security leaders are warning that **AI agents are increasingly operating as “digital employees”** inside enterprise workflows—triaging alerts, coordinating investigations, and moving work across security tools—often with **broad permissions and limited governance**. The core risk highlighted is that organizations are deploying high-authority agents like plug-ins (reused service accounts, overbroad roles, weak oversight), creating fast-acting operators that can be manipulated and that lack the contextual judgment and policy awareness expected of human staff. Related commentary also raises concerns about **AI-to-AI communication** and “non-human-readable” behaviors that could reduce auditability and complicate investigations and control enforcement. In parallel, public examples show how quickly AI can accelerate **vulnerability discovery**: Microsoft Azure CTO Mark Russinovich reported using *Claude Opus 4.6* to decompile decades-old Apple II 6502 machine code and identify multiple issues, underscoring that similar techniques could be applied to **embedded/legacy firmware at scale**. Anthropic has also cautioned that advanced models can find high-severity flaws even in heavily tested codebases, reinforcing the likelihood that both defenders and attackers will leverage AI for faster bug-finding. Separate enterprise IT coverage notes that organizations are **reallocating budgets toward AI** by consolidating tools and renegotiating contracts, which can indirectly increase security exposure if cost-cutting reduces overlapping controls or if AI adoption outpaces governance and identity/access management maturity.
1 weeks ago