Mallory

Emerging Data Risks and Security Challenges from Enterprise AI Adoption

Updated October 10, 2025 at 05:00 PM · 4 sources


Enterprises are rapidly integrating artificial intelligence (AI) into their core operations, leading to a significant increase in both the scale and complexity of cybersecurity risks. Autonomous AI agents, once limited to providing suggestions, now act independently within enterprise systems, accessing sensitive data, executing transactions, and triggering downstream workflows without human oversight. These agents, often deployed by individual teams or embedded in third-party software, can inadvertently ingest confidential information, such as customer credit card data, even if the data is only briefly accessible. Unlike human users, AI agents lack contextual understanding and ethical judgment, and they act continuously and at scale, which introduces a new category of 'shadow AI' risk.

Multimodal AI systems, which process multiple input streams to generate more human-like outputs, further expand the attack surface. Adversaries can exploit these systems by manipulating data inputs, such as subtly altering images or text, to deceive the AI and bypass security controls. Research has demonstrated that these attacks are not merely theoretical; adversarial manipulations can evade detection and cause significant harm, especially in critical sectors like defense, healthcare, and finance.

Organizations are increasingly aware of the dangers posed by AI-augmented threats, including deepfakes and AI-driven social engineering, but many lag in implementing effective technical defenses. Surveys indicate that a majority of firms have experienced deepfake or AI-voice fraud attempts, and that more than half of those targeted have suffered financial losses as a result. Despite this, investment in detection and mitigation technologies remains inadequate, and many companies overestimate their preparedness. The surge in AI adoption is reflected in corporate disclosures, with over 70% of S&P 500 firms now reporting AI as a material risk, up from just 12% two years prior.
Reputational and cybersecurity risks are the most frequently cited concerns, followed by legal and regulatory challenges as governments move to establish AI-specific compliance requirements. However, only a minority of corporate boards have formally integrated AI oversight into their governance structures, highlighting a gap between risk awareness and actionable governance. The lack of comprehensive frameworks for managing AI risk leaves organizations vulnerable to both technical and compliance failures. As AI becomes more deeply embedded in business processes, the need for robust governance, continuous education, and responsible-use frameworks becomes increasingly urgent. Security and governance leaders must adapt to this new frontier by developing strategies that address the unique risks posed by autonomous and multimodal AI systems. Failure to do so could result in significant financial, operational, and reputational damage as adversaries continue to exploit the evolving AI landscape.
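The ingestion risk described above, where an agent briefly sees customer card data, can be reduced by redacting sensitive values before text ever reaches the agent. A minimal Python sketch, using a Luhn checksum to limit false positives; the regex and mask format are illustrative assumptions, not a production DLP rule:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_valid(candidate: str) -> bool:
    """Luhn checksum, used to cut false positives on random digit runs."""
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def redact_card_numbers(text: str) -> str:
    """Mask Luhn-valid card numbers, keeping only the last four digits."""
    def mask(match: re.Match) -> str:
        candidate = match.group(0)
        if not luhn_valid(candidate):
            return candidate  # not a plausible card number; leave it alone
        last4 = re.sub(r"\D", "", candidate)[-4:]
        return "**** **** **** " + last4
    return CARD_RE.sub(mask, text)
```

Running such a filter over any text queued for an agent means the agent never holds the raw number, regardless of how long the source data was exposed.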

Sources

October 10, 2025 at 12:00 AM
October 10, 2025 at 12:00 AM

Related Stories

AI Adoption and Risks in Enterprise Cybersecurity

A global shift is underway as artificial intelligence becomes the top investment priority for cybersecurity among organizations, according to a recent PwC survey of nearly 4,000 business and technology executives. Sixty percent of respondents indicated that AI technologies are their primary focus for cybersecurity spending in the coming year, with key use cases including threat hunting, AI agents, event detection, and behavioral analysis. This surge in AI adoption is driven in part by ongoing skills shortages, with over half of organizations prioritizing AI and machine learning tools to close capability gaps. Additionally, 38% of companies are turning to managed service providers to access the necessary AI expertise. However, the rapid integration of AI into enterprise environments is not without significant risks. New research from LayerX reveals that AI has already become the leading uncontrolled channel for corporate data exfiltration, surpassing traditional risks like shadow SaaS and unmanaged file sharing. The report highlights that 45% of enterprise employees use generative AI tools, such as ChatGPT, Claude, and Copilot, with AI accounting for 11% of all enterprise application activity. Alarmingly, 67% of AI usage occurs through unmanaged personal accounts, leaving security teams with little visibility or control over sensitive data flows. The research found that 40% of files uploaded to generative AI platforms contain personally identifiable information (PII) or payment card information (PCI), and a significant portion of these uploads are conducted via personal accounts. Traditional data loss prevention (DLP) tools are often ineffective in this context, as they are not designed to monitor or control the new channels introduced by AI tools. 
Experts warn that the lack of governance and oversight around AI usage in enterprises creates substantial risks, including the potential for AI agents to access sensitive data beyond their intended scope if not properly managed. The PwC survey also notes that organizations are balancing investments between proactive technologies, such as monitoring and testing, and reactive measures like incident response and recovery. As AI becomes more deeply embedded in cybersecurity operations, the need for robust governance, least-privilege access for AI agents, and updated security controls is increasingly urgent. The evolving geopolitical landscape further complicates the risk environment, making it critical for organizations to understand both the opportunities and threats posed by AI in cybersecurity. The convergence of rapid AI adoption and insufficient controls underscores the importance of immediate action to secure enterprise data and workflows against emerging AI-driven threats.
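The visibility gap the report describes, uploads flowing through personal accounts that traditional DLP never sees, is usually addressed with a gateway that inspects both the channel and the content. A minimal sketch of such a policy check; the patterns, field names, and verdicts are all illustrative assumptions, and a real control would live in a browser extension or forward proxy rather than application code:

```python
import re
from dataclasses import dataclass

# Illustrative patterns standing in for a real DLP ruleset.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b\d{4}(?:[ -]?\d{4}){3}\b"),
}

@dataclass
class UploadRequest:
    destination: str   # e.g. "chatgpt.com"
    account_type: str  # "managed" (corporate SSO) or "personal"
    content: str

def evaluate_upload(req: UploadRequest, sanctioned: set) -> tuple:
    """Return (verdict, findings) for an attempted upload to a GenAI tool."""
    findings = [name for name, pat in PATTERNS.items() if pat.search(req.content)]
    if req.account_type != "managed":
        return "block", findings   # unmanaged personal account: no visibility, no upload
    if req.destination not in sanctioned:
        return "block", findings   # managed account, but unsanctioned destination
    if findings:
        return "review", findings  # sanctioned channel, sensitive content: human review
    return "allow", findings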

5 months ago

Security and Risk Implications of Agentic AI and AI-Generated Code in the Enterprise

The rapid integration of agentic AI systems and AI-generated code into enterprise environments is fundamentally transforming business operations, productivity, and the cybersecurity landscape. AI agents are now embedded in daily workflows, automating tasks and augmenting human capabilities, but their lack of human intuition and ethical judgment introduces new attack surfaces and vulnerabilities. Security experts warn that the rush to deploy agentic AI—autonomous systems capable of executing complex, multistep tasks—without adequate governance or oversight is creating significant risks, including the "confused deputy" problem, where AI agents can be manipulated into misusing their privileges. The proliferation of AI-generated code further compounds these risks, as studies show a high prevalence of design flaws and security vulnerabilities in code produced by large language models, leading to increased technical debt and instability in software delivery. Organizations face mounting challenges in managing accountability and liability as AI systems act with greater autonomy. The lack of robust AI governance policies leaves enterprises exposed to breaches and regulatory risks, with a majority of organizations unprepared to manage the proliferation of "shadow AI." The surge in AI-driven web traffic is disrupting traditional business models in publishing and ecommerce, while adversaries exploit the gap between human and machine decision-making. Security leaders emphasize the need for human oversight, strong identity governance, and comprehensive risk management strategies to address the dual front of human-AI business risk and to ensure that AI adoption does not outpace the organization's ability to secure and govern these powerful new tools.
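The "confused deputy" problem occurs when an agent's own broad service credentials end up authorizing actions it performs for a less-privileged user. One common mitigation is to authorize each tool call against the human principal's entitlements instead of the agent's identity. A minimal sketch under that assumption; the tool registry and entitlement table are hypothetical, not a real framework API:

```python
# Illustrative registry of tools an agent can call; names are hypothetical.
TOOLS = {
    "read_tickets": lambda: ["TICKET-101", "TICKET-102"],
    "send_email": lambda recipient: f"sent to {recipient}",
}

# Per-user entitlements; in practice these come from the IdP/IGA system.
USER_ENTITLEMENTS = {
    "alice": {"read_tickets", "send_email"},
    "bob": {"read_tickets"},
}

def invoke_tool(principal: str, tool: str, *args):
    """Authorize against the human principal's entitlements, never against
    the agent's own (typically broader) service credentials."""
    if tool not in USER_ENTITLEMENTS.get(principal, set()):
        raise PermissionError(f"{principal} is not entitled to call {tool}")
    return TOOLS[tool](*args)
```

With this pattern, an attacker who tricks the agent into requesting a privileged action on behalf of a low-privilege user still hits the entitlement check.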

4 months ago

AI Governance and Security Challenges in Enterprise Environments

Enterprises are facing a critical inflection point as artificial intelligence becomes deeply embedded across organizational layers, fundamentally altering cyber risk and security postures. Research from industry leaders and the Cloud Security Alliance highlights that mature governance frameworks are now the primary differentiator for organizations confident in their ability to secure AI systems. As AI agents and machine identities proliferate, traditional identity and access management models are proving inadequate, with identity emerging as the new control plane for managing AI risk. The rapid adoption of AI, often without sufficient oversight, is creating new blind spots, expanding attack surfaces, and introducing risks such as shadow AI, where unsanctioned tools and agents operate outside established security controls. Security teams are increasingly involved in AI adoption, leveraging AI for detection, investigation, and response, but the lack of comprehensive governance and workforce training remains a significant barrier. The convergence of AI with other technologies, such as blockchain and cryptocurrency, is also driving the emergence of autonomous financial systems and agentic payments, further complicating the security landscape. Success in this new paradigm requires balancing innovation with robust accountability, ensuring that AI-driven systems are auditable and governed rather than left to unconstrained automation. As organizations move from experimentation to operational deployment of AI, the need for continuous, data-aware identity security and formal governance policies is paramount to mitigate risks, ensure compliance, and maintain confidence in AI-enabled operations.
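Treating identity as the control plane typically translates into short-lived, narrowly scoped credentials for each machine identity rather than long-lived broad ones. A self-contained sketch of the idea using an HMAC-signed token with an expiry and a scope list; this is illustrative only, and real deployments would rely on a standard such as OAuth 2.0 client credentials or SPIFFE/SPIRE:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative; a real deployment uses managed keys

def issue_agent_token(agent_id: str, scopes: list, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, narrowly scoped credential for a machine identity."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_agent_token(token: str, required_scope: str) -> bool:
    """Accept the token only if signature, expiry, and scope all check out."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and required_scope in claims["scopes"]
```

Because every credential expires in minutes and names its scopes explicitly, a leaked or over-curious agent identity is bounded in both time and reach, which is the auditable-by-default posture the research above calls for.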

2 months ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed — before adversaries strike.