Security Risks and Challenges of AI-Generated Code for Developers

Updated October 17, 2025 at 03:26 PM · 2 sources

The widespread adoption of generative AI (GenAI) tools in software development has significantly increased productivity, enabling developers to document, write, and optimize code at unprecedented speed. According to a 2023 McKinsey study, organizations have rapidly integrated AI into their development workflows: 83% use AI for code creation, and 57% rely on AI-powered coding tools as standard practice.

This surge in AI-assisted coding has introduced new security risks, because traditional security models focused on perimeter or infrastructure controls do not adequately protect the data and code these tools generate. Studies have found that nearly half of the code snippets produced by popular AI models contain vulnerabilities, underscoring how common insecure code generation is. High-profile incidents, such as Samsung's 2023 ban on ChatGPT after a sensitive code leak, show the real-world consequences of insufficient safeguards in development environments. And while cloud providers secure the underlying infrastructure, the responsibility for securing data and code remains with developers.

The pace of AI-generated code has outstripped the ability of traditional secure coding training to keep up, shifting the focus from training human programmers to ensuring that AI systems themselves can code securely. Industry experts note that AI is currently less effective at producing secure code than human programmers, a trend confirmed by studies and reports from sources such as Schneier on Security, Veracode, and SC Media. Meanwhile, the volume of vulnerabilities continues to rise, with over 47,000 publicly known vulnerabilities expected in a single year and at least 130 new vulnerabilities reported daily, fueling a constant cycle of exploitation and patching and further underlining the need for secure coding at the AI level.

AI has delivered substantial productivity gains, with developers reporting increases of 30% to 40%, but those benefits are undermined by the security shortcomings of AI-generated code. The industry is now at a crossroads where the imperative is to teach AI systems to code securely rather than relying solely on human oversight or post-development security reviews. Integrating security into the AI coding process and providing developers with tools that embed data protection are essential steps toward meeting these challenges. The shift toward AI-driven development requires a reevaluation of security strategies, with proactive measures that match the realities of modern software engineering; without such changes, organizations risk significant security threats from the very tools designed to enhance their productivity.
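
To make the "nearly half contain vulnerabilities" finding concrete, the sketch below shows one of the most common flaw classes in generated snippets: SQL built by string interpolation, next to the parameterized form a secure-by-default assistant should emit. It is an illustrative example, not code from any of the cited studies; the users table and find_user functions are hypothetical.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern assistants frequently emit: SQL assembled by string
    # interpolation. Input like  anyone' OR '1'='1  returns every row.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the value as a literal
    # string, never as SQL, so the same input matches nothing.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    payload = "anyone' OR '1'='1"
    print(find_user_insecure(conn, payload))  # leaks both rows
    print(find_user_secure(conn, payload))    # returns []
```

Static analyzers flag this particular pattern reliably; the harder problem the article describes is the sheer volume at which such code now ships.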

Related Stories

Security Risks and Best Practices in the Adoption of AI Coding Assistants

The rapid adoption of AI coding assistants is fundamentally transforming software development practices across the technology industry. Major companies such as Coinbase, Accenture, Box, Duolingo, Meta, and Shopify have begun mandating AI coding assistants for their engineering teams, with some executives going as far as terminating employees who resist upskilling in AI. The shift is driven by the significant productivity gains these assistants offer, enabling developers to accelerate deployment and experiment with new approaches.

The integration of these tools also introduces substantial new security challenges, particularly for software supply chain security. Security researchers warn that AI-generated code often relies on existing libraries and codebases that may be old, vulnerable, or of low quality, so previously fixed vulnerabilities can be reintroduced into new projects, and fresh issues can arise because AI-generated code lacks context-specific considerations. The phenomenon known as "vibe coding", where developers quickly adopt AI-generated code without fully understanding its implications, exacerbates these risks, and models trained on insecure or outdated data can perpetuate flaws faster than human reviewers can catch them. As AI coding assistants become integral to the development lifecycle, an organization's attack surface can grow by an order of magnitude.

Security practitioners therefore emphasize secure coding strategies tailored to the era of AI-assisted development. Effective communication between security teams and developers is critical to ensure that AI tools are adopted safely and that their benefits do not come at the expense of security. Organizations must rethink their development lifecycles, incorporating rigorous review processes and updated security protocols for the unique challenges of AI-generated code.

The transition to AI-driven development is inevitable, but it demands proactive risk management. Security teams must lead in establishing best practices and fostering collaboration so that the adoption of AI coding assistants enhances rather than undermines organizational security. As these assistants become non-negotiable tools, responsibility falls on both security professionals and engineers to adapt and safeguard the software supply chain; the future of secure software development will depend on how effectively organizations integrate AI tools while mitigating the associated risks.
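
One concrete control for the reintroduced-dependency risk described above is to gate merges on a vulnerability scan of whatever packages the assistant pulled in. The sketch below is a minimal, hypothetical CI gate in Python; it assumes the pip-audit scanner is installed and that dependencies are listed in requirements.txt, and it stands in for whichever audit tooling a team actually uses.

```python
import subprocess
import sys

def audit_requirements(path: str = "requirements.txt") -> bool:
    """Run pip-audit against a requirements file; return True if clean."""
    # pip-audit exits non-zero when it finds known-vulnerable packages,
    # so the return code doubles as a pass/fail signal for CI.
    result = subprocess.run(
        ["pip-audit", "--requirement", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("Vulnerable dependencies found:", file=sys.stderr)
        print(result.stdout, file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    # Fail the build (e.g., a pre-merge CI step) on any known CVE.
    sys.exit(0 if audit_requirements() else 1)
```

The same check can run as a pre-commit hook; the point is simply that AI-suggested dependencies get the same scrutiny as human-chosen ones.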

2 months ago

Security Risks of AI-Generated Code in Enterprise Applications

The rapid adoption of AI-powered code generation tools such as GitHub Copilot, ChatGPT, and Amazon CodeWhisperer has fundamentally changed the software development landscape, introducing new security challenges for enterprise application security teams. Unlike traditional human-written code, AI-generated code often lacks clear provenance, making it difficult to verify its origin or ensure compliance with organizational security policies. This shift has led to the emergence of "shadow code": machine-generated code that may bypass standard security reviews and evade detection by traditional static and dynamic analysis tools, increasing the risk of invisible vulnerabilities in production systems.

Generative AI models can also introduce unique threats, including "hallucinated" packages: references to non-existent or malicious libraries that may be inadvertently included in enterprise applications. Additionally, the language-based nature of large language models (LLMs) opens new attack surfaces, such as prompt injection and jailbreaking, where malicious inputs can manipulate model behavior or bypass safety constraints. As organizations accelerate the integration of AI into development workflows, application security programs must adapt to address these novel risks and build trust in the security of AI-powered software.
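
The hallucinated-package threat admits a simple first-line guard: verify that each dependency an assistant proposes actually exists on the package index before it is installed. A minimal sketch, assuming Python dependencies and PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json, which returns 404 for unknown packages); the sample package names are hypothetical.

```python
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows the package, False on a 404."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # likely a hallucinated (or typo'd) name
        raise  # other HTTP errors: fail loudly, do not assume safety

if __name__ == "__main__":
    # Hypothetical names an assistant might emit alongside real ones.
    for pkg in ["requests", "definitely-not-a-real-pkg-xyz"]:
        status = "found" if exists_on_pypi(pkg) else "MISSING from PyPI"
        print(f"{pkg}: {status}")
```

Existence alone is a weak signal, since attackers pre-register commonly hallucinated names, so in practice this check belongs alongside an allow-list or vulnerability scan; it illustrates the class of control such programs need.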

3 months ago

Security Risks and Remediation Challenges of AI-Generated Code and Agentic AI in Cybersecurity

The rapid adoption of agentic AI and AI-generated code is transforming cybersecurity operations, offering both significant opportunities and new risks. Security leaders and CISOs are increasingly leveraging agentic AI for autonomous threat detection and response, as highlighted by industry experts from organizations like Dell Technologies and Zoom.

The proliferation of AI-generated code in enterprise environments, however, has introduced complex security challenges: studies show that critical vulnerabilities can increase as AI-generated code is refined, and remediation of such code often takes significantly longer than for human-written code. The financial impact of breaches involving AI-generated logic is substantial, with incidents costing millions and compliance fines mounting over unpatched flaws. Traditional application security tools are struggling to keep pace with the unique risks posed by AI-generated code, which often lacks clear human intent and context, and industry surveys report that misalignment between security and engineering teams delays remediation, prolonging exposure and increasing risk.

The need for new control layers, such as agentic remediation, is becoming evident to govern and secure AI-written code at scale. As AI accelerates both the sophistication and the volume of cyber threats, organizations must balance its productivity gains against the heightened risk and complexity it introduces to their security posture.

3 months ago
