Mallory

AI Adoption and Risks in Enterprise Cybersecurity

Updated October 7, 2025 at 01:00 PM · 2 sources


A global shift is underway as artificial intelligence becomes the top investment priority for cybersecurity among organizations, according to a recent PwC survey of nearly 4,000 business and technology executives. Sixty percent of respondents indicated that AI technologies are their primary focus for cybersecurity spending in the coming year, with key use cases including threat hunting, AI agents, event detection, and behavioral analysis. This surge in AI adoption is driven in part by ongoing skills shortages: over half of organizations are prioritizing AI and machine learning tools to close capability gaps, and 38% of companies are turning to managed service providers to access the necessary AI expertise.

The rapid integration of AI into enterprise environments is not without significant risks, however. New research from LayerX reveals that AI has already become the leading uncontrolled channel for corporate data exfiltration, surpassing traditional risks such as shadow SaaS and unmanaged file sharing. The report finds that 45% of enterprise employees use generative AI tools such as ChatGPT, Claude, and Copilot, with AI accounting for 11% of all enterprise application activity. Alarmingly, 67% of that usage occurs through unmanaged personal accounts, leaving security teams with little visibility or control over sensitive data flows. LayerX also found that 40% of files uploaded to generative AI platforms contain personally identifiable information (PII) or payment card information (PCI), and a significant portion of these uploads are conducted via personal accounts.

Traditional data loss prevention (DLP) tools are often ineffective in this context, because they were not designed to monitor or control the new channels introduced by AI tools (a minimal illustration appears at the end of this article). Experts warn that the lack of governance and oversight around enterprise AI usage creates substantial risk, including the potential for AI agents to access sensitive data beyond their intended scope if not properly managed.

The PwC survey also notes that organizations are balancing investments between proactive technologies, such as monitoring and testing, and reactive measures like incident response and recovery. As AI becomes more deeply embedded in cybersecurity operations, the need for robust governance, least-privilege access for AI agents, and updated security controls grows increasingly urgent. The evolving geopolitical landscape further complicates the risk environment, making it critical for organizations to understand both the opportunities and the threats AI poses in cybersecurity. The convergence of rapid AI adoption and insufficient controls underscores the importance of immediate action to secure enterprise data and workflows against emerging AI-driven threats.
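The DLP gap described above is easy to make concrete. The following minimal sketch (an illustration, not any vendor's implementation) shows the kind of egress check that traditional DLP typically lacks for AI channels: text bound for a generative-AI domain is scanned for email addresses and for card-number candidates validated with a Luhn checksum. The domain list, patterns, and function names are assumptions for illustration only.

```python
import re

# Illustrative generative-AI endpoints an egress proxy might watch;
# a real deployment would maintain this list dynamically (assumption).
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "copilot.microsoft.com"}

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
CARD_CANDIDATE_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(candidate: str) -> bool:
    """Luhn checksum: True for plausibly valid payment card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return len(digits) >= 13 and total % 10 == 0

def flag_upload(domain: str, payload: str) -> list[str]:
    """Return reasons to alert on text uploaded to a GenAI domain."""
    if domain not in GENAI_DOMAINS:
        return []
    reasons = []
    if EMAIL_RE.search(payload):
        reasons.append("possible PII: email address")
    if any(luhn_valid(m.group()) for m in CARD_CANDIDATE_RE.finditer(payload)):
        reasons.append("possible PCI: Luhn-valid card number")
    return reasons

if __name__ == "__main__":
    sample = "Card 4111 1111 1111 1111 belongs to jane.doe@example.com"
    print(flag_upload("claude.ai", sample))
    # ['possible PII: email address', 'possible PCI: Luhn-valid card number']
```

Real coverage of AI channels would need far more than regexes (screenshot OCR, context-aware classifiers, per-account attribution), but even this sketch shows why controls built for email and file shares miss browser-based AI uploads.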


Related Stories

Emerging Data Risks and Security Challenges from Enterprise AI Adoption

Enterprises are rapidly integrating artificial intelligence (AI) into their core operations, leading to a significant increase in both the scale and complexity of cybersecurity risks. Autonomous AI agents, once limited to providing suggestions, now act independently within enterprise systems, accessing sensitive data, executing transactions, and triggering downstream workflows without human oversight. These agents, often deployed by individual teams or embedded in third-party software, can inadvertently ingest confidential information, such as customer credit card data, even if the data is only briefly accessible. Unlike human users, AI agents lack contextual understanding and ethical judgment, and they act continuously and at scale, introducing a new category of 'Shadow AI' risk.

Multimodal AI systems, which process multiple input streams to generate more human-like outputs, further expand the attack surface. Adversaries can exploit these systems by manipulating data inputs, such as subtly altering images or text, to deceive the AI and bypass security controls (a minimal sketch of one such attack follows this summary). Research has demonstrated that these attacks are not merely theoretical: adversarial manipulations can evade detection and cause significant harm, especially in critical sectors like defense, healthcare, and finance.

Organizations are increasingly aware of the dangers posed by AI-augmented threats, including deepfakes and AI-driven social engineering, but many lag in implementing effective technical defenses. Surveys indicate that a majority of firms have experienced deepfake or AI-voice fraud attempts, and that more than half have suffered financial losses as a result. Despite this, investment in detection and mitigation technologies remains inadequate, and many companies overestimate their preparedness.

The surge in AI adoption is reflected in corporate disclosures, with over 70% of S&P 500 firms now reporting AI as a material risk, up from just 12% two years prior. Reputational and cybersecurity risks are the most frequently cited concerns, followed by legal and regulatory challenges as governments move to establish AI-specific compliance requirements. However, only a minority of corporate boards have formally integrated AI oversight into their governance structures, highlighting a gap between risk awareness and actionable governance.

The lack of comprehensive frameworks for managing AI risk leaves organizations vulnerable to both technical and compliance failures. As AI becomes more deeply embedded in business processes, the need for robust governance, continuous education, and responsible-use frameworks grows increasingly urgent. Security and governance leaders must adapt by developing strategies that address the unique risks posed by autonomous and multimodal AI systems; failure to do so could result in significant financial, operational, and reputational damage as adversaries continue to exploit the evolving AI landscape.
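The adversarial input manipulation mentioned above has a well-known concrete form: the fast gradient sign method (FGSM) of Goodfellow et al., which nudges every input pixel a small step in the direction that increases the model's loss. The sketch below assumes a PyTorch image classifier; the model, labels, and epsilon value are illustrative stand-ins, not details from the research summarized here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,          # input batch, pixel values in [0, 1]
                y: torch.Tensor,          # true class labels
                epsilon: float = 0.03) -> torch.Tensor:
    """Fast gradient sign method: perturb x so the model's loss on y rises.

    Each pixel shifts by +/- epsilon along the sign of the loss gradient,
    a change typically imperceptible to a human reviewer.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()   # stay in valid pixel range
```

Defenses such as adversarial training and input sanitization target precisely this gradient-following behavior, which is why they belong among the technical controls the surveys find lacking.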

5 months ago

AI Adoption in Enterprises Outpaces Security Governance and Data Protection

Organizations are rapidly integrating AI technologies into their operations, with studies showing a significant increase in AI adoption and the implementation of AI acceptable use policies. Despite these advances, many companies struggle to effectively classify and protect data, and governance frameworks often lag behind the pace of AI deployment. Reports highlight that while 80% of organizations have established AI use policies, only a third feel confident in their data protection measures, and over half believe their data is not yet ready for AI. The gap between adoption and governance is further exacerbated by AI-driven acceleration of data growth, with a notable rise in organizations managing petabyte-scale datasets.

The lack of robust governance and holistic data management frameworks has led to increased risks, including the emergence of shadow identities and unmitigated security threats associated with AI tools. Experts emphasize the need for organizations to move beyond initial policy creation and embed comprehensive AI governance and dynamic data protection into their core operations (one way to enforce this is sketched below). Without these measures, the benefits of AI could be undermined by vulnerabilities and operational blind spots, making it critical for security teams to address these challenges proactively as AI becomes ubiquitous in enterprise environments.
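One way to make that embedding concrete is to treat data classification as a hard precondition for AI use. The sketch below is hypothetical (the labels, names, and policy are assumptions, not any cited framework): a guard that refuses unclassified or restricted datasets before they reach a model pipeline.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sensitivity tiers; real schemes vary by organization.
ALLOWED_FOR_AI = {"public", "internal"}
BLOCKED_FOR_AI = {"confidential", "restricted"}

@dataclass
class Dataset:
    name: str
    classification: Optional[str]  # None models the common "never classified" case

def gate_for_ai_use(ds: Dataset) -> None:
    """Raise unless the dataset is explicitly classified as safe for AI use."""
    if ds.classification is None:
        raise PermissionError(f"{ds.name}: unclassified data may not enter AI pipelines")
    if ds.classification in BLOCKED_FOR_AI:
        raise PermissionError(f"{ds.name}: '{ds.classification}' data is blocked for AI use")
    if ds.classification not in ALLOWED_FOR_AI:
        raise PermissionError(f"{ds.name}: unknown label '{ds.classification}'")

gate_for_ai_use(Dataset("support_tickets_2024", "internal"))   # passes
# gate_for_ai_use(Dataset("card_ledger", None))                # raises
```

The point of such a gate is less the code than the default: data nobody has classified is treated as unsafe, inverting the permissive posture most AI deployments inherit.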

3 months ago

Security and Risk Implications of AI Tools in the Enterprise

Organizations are rapidly adopting artificial intelligence (AI) tools to enhance cybersecurity operations, streamline workflows, and improve productivity, but this trend introduces significant new risks and challenges. Reports indicate that cybersecurity professionals with AI security skills are in high demand, as companies seek to leverage AI for vulnerability management, threat detection, and automation of security tasks. The integration of AI into security teams' arsenals is accelerating, with agentic AI tools becoming increasingly common for both defensive and operational purposes.

However, the proliferation of AI-powered applications, such as AI notetakers in virtual meetings, raises concerns about data privacy, compliance, and the potential exposure of sensitive information. Many AI notetaking tools operate outside official enterprise systems, often lacking controls such as SOC 2 certification, GDPR compliance, or strong encryption, which leaves them vulnerable to data breaches and mishandling. The risk is compounded by the rapid spread of these tools within organizations, sometimes without proper vetting by legal, security, or procurement teams. Transcripts generated by these applications can be stored in third-party systems, increasing the risk of unauthorized access or legal discoverability. Security leaders are advised to develop clear policies and governance frameworks for AI tools, ensuring that only approved applications with adequate security measures are deployed (a sketch of such a vetting check follows this summary).

The evolving landscape of AI in cybersecurity also includes increased merger and acquisition activity, as companies seek to acquire innovative AI security capabilities. Industry analysis highlights the need for continuous evaluation of AI models such as DeepSeek, and of the security implications of agent frameworks like OpenAI's AgentKit. The impact of AI-generated code on application security is another emerging concern, as automated code generation can introduce vulnerabilities if not properly reviewed.

As AI becomes more embedded in business processes, organizations must balance the benefits of automation and efficiency with the imperative to safeguard sensitive data and maintain regulatory compliance. Security teams are encouraged to stay informed about the latest trends in AI security, invest in upskilling staff, and implement layered defenses to mitigate the unique risks posed by AI-driven tools. The convergence of AI and cybersecurity is reshaping the threat landscape, requiring proactive risk management and strategic investment in secure AI adoption.
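Such a governance framework can be encoded directly into procurement tooling. The sketch below is hypothetical (the field names, controls, and retention limit are assumptions): it scores a candidate AI notetaker against the controls this summary mentions, namely SOC 2, GDPR, encryption, and transcript retention.

```python
from dataclasses import dataclass

@dataclass
class NotetakerVendor:
    name: str
    soc2_certified: bool
    gdpr_compliant: bool
    encrypts_at_rest: bool
    encrypts_in_transit: bool
    retention_days: int  # how long transcripts persist in the vendor's systems

def approve(vendor: NotetakerVendor,
            max_retention_days: int = 30) -> tuple[bool, list[str]]:
    """Return (approved, failed controls) for a candidate notetaking tool."""
    failures = []
    if not vendor.soc2_certified:
        failures.append("no SOC 2 report")
    if not vendor.gdpr_compliant:
        failures.append("no GDPR compliance attestation")
    if not (vendor.encrypts_at_rest and vendor.encrypts_in_transit):
        failures.append("weak or missing encryption")
    if vendor.retention_days > max_retention_days:
        failures.append(
            f"retention of {vendor.retention_days}d exceeds {max_retention_days}d limit"
        )
    return (not failures, failures)

ok, gaps = approve(NotetakerVendor("ExampleScribe", True, True, True, False, 365))
print(ok, gaps)
# False ['weak or missing encryption', 'retention of 365d exceeds 30d limit']
```

A check like this only matters if it runs before deployment, which is exactly the vetting step the summary says many organizations skip.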

5 months ago
