Mallory

AI Agent Adoption Outpacing Safety and Governance Controls

Tags: ai agents, responsible ai, agents, intrusion lifecycle, safety, oversight, governance, law enforcement, policy, controls, response windows, ai, autonomous, bias
Updated January 23, 2026 at 08:00 AM · 4 sources

Organizations are rapidly expanding their use of AI agents (systems that can execute multi-step tasks with limited human supervision) while governance, safety, and oversight controls lag behind. Deloitte’s State of AI in the Enterprise survey of more than 3,200 business leaders across 24 countries found that 23% of companies already use AI agents “at least moderately,” a figure projected to rise to 74% within two years, while only about 21% said they have robust safety and oversight mechanisms in place. Separately, commentary warning about AI-enabled intrusion acceleration cited a purported “GTG-1002” campaign in which AI agents allegedly automated most of the intrusion lifecycle and compressed response windows, arguing that traditional SOC processes struggle against autonomous, high-velocity adversary tradecraft.

Multiple other items in the set focus on broader responsible AI and policy concerns rather than a single security incident: an interview-style piece describes how “responsible AI” functions inside a large vendor’s product process, and another report highlights expert concerns about deploying LLM tools in law enforcement workflows (e.g., summarizing body camera transcripts or generating crime scene photo descriptions) given risks like hallucinations and bias. A separate business-leadership article frames cybersecurity and AI as strategic imperatives amid geopolitical instability but does not provide incident-specific or vulnerability-specific details. Overall, the material is best characterized as governance and risk posture coverage around agentic AI rather than a unified, verifiable breach or vulnerability disclosure.

Related Stories

AI Adoption and Governance Concerns Amid Emerging Agentic-AI Security Risks


Organizations are accelerating adoption of **generative and agentic AI**, but reporting indicates governance, data readiness, and workforce skills are lagging. A survey of chief data officers cited widespread use of genAI in large enterprises and growing plans to increase **data management** investment, while also flagging that visibility and governance have not kept pace with expanding AI usage and that many employees need upskilling in **data** and **AI literacy** to use AI outputs responsibly. Separately, commentary and reporting highlighted a widening set of AI-related security and societal risks, including concerns about **deepfakes**, privacy, and opaque model behavior, alongside claims of real-world exploitation activity targeting AI-adjacent developer workflows (for example, token theft via compromised automation such as GitHub Actions) and discussion of vulnerabilities affecting AI tooling and agent communication patterns. Other items in the set were primarily newsletter/personal updates or vendor-style announcements and did not provide a single, verifiable incident narrative beyond general AI-and-security trend coverage.

1 week ago
AI Adoption and Misuse Expands Enterprise and Cybercrime Risk


No single incident ties the reporting together; the dominant theme is **AI’s expanding role in both enterprise operations and criminal tradecraft**, alongside broader, non-AI security trend commentary. A Docker-sponsored survey reported by *Help Net Security* found that **60% of organizations run AI agents in production**, with **security/compliance the top barrier to scaling (cited by 40%)**; recurring concerns included *prompt injection*, *tool poisoning*, runtime isolation/sandboxing, auditability, and credential/access control in distributed agent systems. Separately, forum-traffic research summarized by *Help Net Security* found cybercriminals increasingly using mainstream and local AI models to support phishing, code generation, and social engineering, with frequent discussion of jailbreaking and of stolen or resold premium AI accounts. Several other items are adjacent but not part of the same story: an ESET article offers **generic guidance** on detecting **AI voice deepfakes** used for fraud; an Ars Technica piece covers **copyright/data memorization** risks in LLMs; and multiple outlets published broader security trend or opinion content (quantum preparedness, ransomware targeting manufacturing, Romanian warnings about ransomware aligning with Russian hybrid aims, increases in ATM jackpotting, and a Check Point retrospective). Some entries are primarily **commentary, historical analogy, newsletters, or how-to recon guidance** rather than new threat reporting, and should be treated as lower-signal for executive situational awareness unless your organization is actively deploying agentic AI or tracking AI-enabled fraud and social engineering.

3 weeks ago
AI Adoption Outpacing Security Governance and Increasing Enterprise Risk Exposure


Enterprises’ rapid deployment of **AI and agentic AI** is increasingly creating measurable security and business risk, including direct exposure of sensitive personal data and downstream impacts on risk transfer. A widely cited example involved McDonald’s *McHire* applicant-screening platform (built by *Paradox.ai*), where researchers reported a trivial backend credential weakness (`123456` as both username and password) and no MFA, potentially exposing data tied to roughly **64 million** applicants; the incident is being used by insurers and risk teams as evidence that AI adoption is moving faster than security and governance, contributing to tighter cyber-insurance language, higher premiums, and **AI-related exclusions**. Separate reporting also highlighted that “plug-and-play” AI is unrealistic at enterprise scale, with organizations increasingly needing custom integration and operational ownership rather than relying on off-the-shelf tools. Threat reporting during the same period reinforced that AI is expanding both attacker capability and the attack surface: researchers described **Pakistan-linked APT36** using AI coding tools to generate high volumes of low-quality malware variants (including in less common languages) and to leverage legitimate cloud services for command-and-control, complicating detection. Additional research flagged **AI-themed browser extensions** (Chrome/Edge) that impersonate legitimate tools and can harvest LLM chat histories and browsing activity, underscoring the risk of “shadow AI” and unvetted add-ons. In parallel, routine threat-intelligence summaries continued to track major incidents (e.g., ransomware and data breaches) alongside AI-enabled tactics, indicating that AI risk is becoming intertwined with broader enterprise security exposure rather than remaining a standalone technology concern.

5 days ago
