Mallory

Research and commentary warn autonomous AI agents are increasing security and financial crime risk

autonomous agents, agentic ai, financial crime, risk disclosure, fraud detection, ai governance, ai scams, illicit crypto, liability, cryptocurrency, safety protocols, email automation, cross-chain, monitoring architecture, pc hijacking
Updated March 2, 2026 at 05:00 AM · 4 sources

Coverage of a new MIT-led survey of 30 widely used agentic AI systems describes a security posture marked by limited risk disclosure, weak transparency, and inconsistent safety protocols; the researchers warn that failure modes are hard to enumerate when developers do not document capabilities and controls. The coverage also points to recent attention on the open-source agent framework OpenClaw, citing reported security flaws that could enable PC hijacking when agents are granted broad permissions (e.g., to operate email and other user workflows), and includes vendor responses from Perplexity, OpenAI, and IBM.
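To make the broad-permissions risk concrete, here is a minimal sketch of a deny-by-default manifest that gates an agent's tool calls, so an email-triage agent cannot silently acquire shell access. All names (AgentManifest, the scopes) are illustrative assumptions, not drawn from OpenClaw or any cited vendor.

```python
# Minimal sketch: deny-by-default permission manifest for an agent's tool calls.
# All names here are hypothetical; none come from OpenClaw or the cited vendors.
from dataclasses import dataclass, field


class ScopeDenied(Exception):
    pass


@dataclass(frozen=True)
class AgentManifest:
    agent_id: str
    # Explicit (tool, action) pairs the agent may invoke; everything else is denied.
    allowed: frozenset = field(default_factory=frozenset)


def authorize(manifest: AgentManifest, tool: str, action: str) -> None:
    """Raise unless the (tool, action) pair is explicitly allowed."""
    if (tool, action) not in manifest.allowed:
        raise ScopeDenied(
            f"agent {manifest.agent_id} denied {tool}.{action}: not in manifest"
        )


# An email-triage agent gets read-only scopes; sending and shell access stay denied.
manifest = AgentManifest(
    agent_id="inbox-triage-01",
    allowed=frozenset({("email", "read"), ("email", "label")}),
)

authorize(manifest, "email", "read")         # permitted
try:
    authorize(manifest, "shell", "execute")  # the PC-hijacking path: denied
except ScopeDenied as e:
    print(e)
```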

Separate industry analysis highlights how increasingly autonomous agents—especially those able to initiate transactions—compress detection windows for abuse and complicate attribution and liability, particularly in crypto and cross-chain contexts where funds can move in seconds. A vendor blog argues that accountability still ultimately rests with the humans who design, deploy, authorize, or benefit from these systems, and that governance/monitoring architecture may become central evidence in enforcement actions; it also claims 2025 illicit crypto volume reached $158B and that AI-enabled scams rose sharply year over year. Broader software-engineering commentary reinforces the trend toward AI-native development and widespread use of AI coding tools, but is largely directional and does not add specific incident or vulnerability detail beyond the general risk discussion.
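Because funds can move in seconds, any meaningful control has to run before the transaction rather than in post-hoc review. Below is a minimal sketch, assuming a hypothetical agent wallet interface and illustrative thresholds, of a pre-transaction guard that enforces a per-transfer cap and a sliding-window velocity limit while writing the append-only audit trail that, per the vendor blog's argument, could become evidence in enforcement actions.

```python
# Minimal sketch of a pre-transaction guard for an agent that can move funds.
# The thresholds, record format, and log destination are hypothetical assumptions.
import json
import time
from collections import deque

PER_TX_CAP = 1_000.0     # max value a single agent-initiated transfer may move
WINDOW_SECONDS = 60      # sliding window for velocity checks
WINDOW_CAP = 2_500.0     # max cumulative value per window

_recent = deque()        # (timestamp, amount) pairs inside the current window


def audit(event: dict) -> None:
    # Append-only log; in practice this would go to tamper-evident storage.
    with open("agent_tx_audit.jsonl", "a") as f:
        f.write(json.dumps({"ts": time.time(), **event}) + "\n")


def guard_transfer(agent_id: str, amount: float) -> bool:
    """Return True only if the transfer passes both cap and velocity checks."""
    now = time.time()
    while _recent and now - _recent[0][0] > WINDOW_SECONDS:
        _recent.popleft()                      # drop entries outside the window
    windowed = sum(a for _, a in _recent)
    allowed = amount <= PER_TX_CAP and windowed + amount <= WINDOW_CAP
    # The audit record is written whether or not the transfer proceeds.
    audit({"agent": agent_id, "amount": amount, "allowed": allowed,
           "window_total": windowed})
    if allowed:
        _recent.append((now, amount))
    return allowed
```

The design choice worth noting is that the guard and the audit write happen in the same code path: the monitoring trail exists even for blocked transfers, which is what makes it usable as evidence later.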

Related Stories

Security Risks and Offensive Potential of Agentic AI and Automated Vulnerability Discovery

Security leaders are warning that **AI agents are increasingly operating as “digital employees”** inside enterprise workflows—triaging alerts, coordinating investigations, and moving work across security tools—often with **broad permissions and limited governance**. The core risk highlighted is that organizations are deploying high-authority agents like plug-ins (reused service accounts, overbroad roles, weak oversight), creating fast-acting operators that can be manipulated and that lack the contextual judgment and policy awareness expected of human staff. Related commentary also raises concerns about **AI-to-AI communication** and “non-human-readable” behaviors that could reduce auditability and complicate investigations and control enforcement. In parallel, public examples show how quickly AI can accelerate **vulnerability discovery**: Microsoft Azure CTO Mark Russinovich reported using *Claude Opus 4.6* to decompile decades-old Apple II 6502 machine code and identify multiple issues, underscoring that similar techniques could be applied to **embedded/legacy firmware at scale**. Anthropic has also cautioned that advanced models can find high-severity flaws even in heavily tested codebases, reinforcing the likelihood that both defenders and attackers will leverage AI for faster bug-finding. Separate enterprise IT coverage notes that organizations are **reallocating budgets toward AI** by consolidating tools and renegotiating contracts, which can indirectly increase security exposure if cost-cutting reduces overlapping controls or if AI adoption outpaces governance and identity/access management maturity.
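One concrete countermeasure to the reused-service-account pattern described above is minting a short-lived, task-scoped token per agent task, so every action is individually attributable and expires on its own. The sketch below uses only the Python standard library; the token format and scope names are assumptions, not any vendor's API.

```python
# Minimal sketch: short-lived, task-scoped tokens per agent task instead of a
# shared service account. Token format and scope names are illustrative only.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"rotate-me"  # in practice, pulled from a secrets manager


def mint_token(agent_id: str, task_id: str, scopes: list, ttl_s: int = 300) -> str:
    claims = {"agent": agent_id, "task": task_id,
              "scopes": scopes, "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"


def verify(token: str, required_scope: str) -> dict:
    body, sig = token.rsplit("|", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(body)
    if time.time() > claims["exp"]:
        raise ValueError("token expired")      # agent must re-request access
    if required_scope not in claims["scopes"]:
        raise ValueError(f"scope {required_scope!r} not granted for this task")
    return claims  # attribution: every action ties back to agent + task


tok = mint_token("triage-agent-7", "case-123", scopes=["alerts:read"])
print(verify(tok, "alerts:read")["agent"])
```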

1 week ago
Agentic AI and AI Automation in Cybersecurity Operations and Risk Management

Security and technology outlets highlighted a growing shift from *GenAI copilots* toward **agentic AI**—systems that can take actions autonomously or semi-autonomously—alongside warnings that governance and oversight are not keeping pace. Commentary in SC Media argued that as enterprises orchestrate hundreds or thousands of agents, traditional *human-in-the-loop* review becomes a scaling bottleneck, pushing organizations toward **human-on-the-loop** monitoring and policy-based exception handling; separate SC Media analysis cautioned CISOs to temper “hype vs. reality” expectations around agentic AI in SOC use cases due to reliability and oversight concerns. Related coverage emphasized adjacent AI risk themes, including research/analysis calling for AI systems to be constrained by values such as fairness, honesty, and transparency, and reporting on “shadow AI” contributing to higher insider-risk costs as employees use unsanctioned tools and workflows. Several items focused on operational and data-security implications of AI-enabled automation. Security Affairs described AI-assisted incident response as a way to accelerate investigations by correlating telemetry across tools, enriching alerts, and producing summaries faster than manual analyst workflows, while a SecuritySenses segment similarly framed AI as best suited for summarization/enrichment and repetitive tasks, with deterministic decisions retained by humans and with attention to securing agent communications (e.g., OWASP guidance for agents). CSO Online reported a specific AI-adjacent exposure risk: a **Google API key change** characterized as “silent” that could expose *Gemini* AI data, and also noted concerns that personal AI agents (e.g., “OpenClaw”) could be influenced by **malicious websites**. Other references in the set were unrelated to this AI/agentic-operations theme (e.g., ransomware impacting a Mississippi healthcare system, China-linked espionage using Google Sheets, legal rulings on personal data, and general conference/event or career items).
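To illustrate the human-in-the-loop vs. human-on-the-loop distinction in that commentary: rather than a human approving every agent action, a policy layer can auto-dispose routine actions and escalate only exceptions to a review queue. A minimal sketch follows; the action types and rules are hypothetical, not from the cited coverage.

```python
# Minimal sketch of human-on-the-loop policy handling: routine agent actions
# proceed automatically, exceptions queue for a human. Rules are hypothetical.
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"   # lands in a human review queue
    DENY = "deny"


REVIEW_QUEUE = []


def evaluate(action: dict) -> Verdict:
    kind = action["kind"]
    if kind == "enrich_alert":                  # read-only: always safe
        return Verdict.ALLOW
    if kind == "isolate_host":
        # Containment runs autonomously only for low-criticality assets.
        if action.get("asset_tier") == "low":
            return Verdict.ALLOW
        return Verdict.ESCALATE
    if kind == "disable_account":               # high blast radius
        return Verdict.ESCALATE
    return Verdict.DENY                         # default-deny unknown actions


def dispatch(action: dict) -> Verdict:
    verdict = evaluate(action)
    if verdict is Verdict.ESCALATE:
        REVIEW_QUEUE.append(action)  # human reviews the exception, not every action
    return verdict


print(dispatch({"kind": "isolate_host", "asset_tier": "high"}))  # Verdict.ESCALATE
```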

2 weeks ago
AI Agent Adoption Outpacing Safety and Governance Controls

Organizations are rapidly expanding the use of **AI agents**—systems that can execute multi-step tasks with limited human supervision—while governance, safety, and oversight controls lag behind. Deloitte’s *State of AI in the Enterprise* survey of 3,200+ business leaders across 24 countries reported **23%** of companies already using AI agents “at least moderately,” projected to rise to **74%** within two years, while only about **21%** said they have robust safety and oversight mechanisms in place. Separately, commentary warning about AI-enabled intrusion acceleration cited a purported “**GTG-1002**” campaign in which AI agents allegedly automated most of the intrusion lifecycle and compressed response windows, arguing that traditional SOC processes struggle against autonomous, high-velocity adversary tradecraft. Multiple other items in the set focus on broader *responsible AI* and policy concerns rather than a single security incident: an interview-style piece describes how “responsible AI” functions inside a large vendor’s product process, and another report highlights expert concerns about deploying LLM tools in **law enforcement** workflows (e.g., summarizing body camera transcripts or generating crime scene photo descriptions) given risks like hallucinations and bias. A separate business-leadership article frames cybersecurity and AI as strategic imperatives amid geopolitical instability but does not provide incident-specific or vulnerability-specific details. Overall, the material is best characterized as **governance and risk posture** coverage around agentic AI rather than a unified, verifiable breach or vulnerability disclosure.

1 month ago
