Mallory

US government and industry expand AI and critical-infrastructure cyber information sharing efforts

Tags: information sharing, critical infrastructure, threat intelligence, national cyber director, agentic AI, AI security, CISA, AI policy, AI-ISAC, MFG-ISAC, operational technology, incident response, ISAC, DHS, undersea cables
Updated February 3, 2026 at 11:02 PM · 3 sources

US cybersecurity officials said work is underway to stand up new and expanded government–industry mechanisms for sharing threat intelligence, with a particular focus on AI security and operational technology (OT) risks to critical infrastructure. CISA executive assistant director Nick Andersen said the AI Information-Sharing and Analysis Center (AI-ISAC) ordered by the White House remains in an “ongoing policy dialogue” phase, with stakeholders working out how to resource the effort without duplicating existing private-sector information-sharing initiatives. He added that there is no launch timeline and described the effort as “pre-decisional.” In parallel, Andersen said DHS/CISA is planning a replacement for the disbanded Critical Infrastructure Partnership Advisory Council (CIPAC), aiming to correct gaps in the prior structure—particularly the lack of an explicit cybersecurity charter—and to enable more targeted focus groups on issues such as undersea cables and OT systems.

Separately, the White House Office of the National Cyber Director said it is developing an AI security policy framework intended to embed security controls into AI “tech stacks” in coordination with the Office of Science and Technology Policy, citing risks such as data poisoning and the potential for agentic capabilities to accelerate intrusions. In the private sector, the Manufacturing ISAC (MFG-ISAC) reported increased collaboration to address rising threats to manufacturing, including OT-focused initiatives such as tabletop exercises, OT training, and development of incident response playbooks and OT threat guidelines, alongside preparation for updated CMMC requirements—reinforcing the broader push toward structured, sector-based information sharing and readiness for critical-infrastructure cyber threats.

Related Stories

Trump Administration Cyber Strategy Emphasizes Secure AI Adoption and Industry Coordination

The White House Office of the National Cyber Director (ONCD) said a forthcoming U.S. national cyber strategy will prioritize **rapid but secure adoption of AI** for cyber defense, aiming to expand the use of AI-enabled tools to *detect, divert, and deceive* threat actors without unintentionally widening the attack surface. ONCD policy lead Alexandra Seymour also highlighted plans to advance U.S. **AI cybersecurity standards**, establish industry best practices for secure AI deployment, and pursue “counter-AI” efforts to protect frontier models and counter adversary use of AI. The strategy is also expected to include a pillar focused on strengthening the cybersecurity workforce by aligning curriculum, workforce standards, cyber literacy, and job placement across government, industry, and academia. Separately, ONCD indicated U.S. cyber responses will be more explicitly **linked to adversary actions** and will require closer coordination with **state/local governments and critical infrastructure owners/operators**, reflecting a more assertive posture driven in part by recent high-profile intrusions into U.S. critical infrastructure (including telecom). In parallel with these federal strategy signals, the U.S. Treasury Department announced it will publish a set of resources created by a public-private coalition to improve **cyber risk management for AI systems in the financial sector**, intended to support secure AI adoption as banks expand AI use for fraud detection, customer service, trading, and risk modeling—areas that can introduce new vulnerabilities due to sensitive data dependencies and third-party/vendor exposure.

3 weeks ago

AI Security Risks and Guidance for Critical Infrastructure and Enterprises

Recent developments highlight the growing security risks associated with the integration of artificial intelligence (AI) into enterprise and operational technology (OT) environments. The U.S. Cybersecurity and Infrastructure Security Agency (CISA), in collaboration with several international partners, has released new guidance outlining key principles for the secure deployment of AI in OT systems, emphasizing the need for critical infrastructure operators to address unique risks such as process model drift and safety-process bypasses. This guidance is expected to influence regulatory approaches as organizations rapidly adopt AI technologies, often without sufficient security rigor. Concurrently, research from NVIDIA and Lakera AI has introduced a comprehensive framework for evaluating the safety and security of agentic AI systems, which autonomously plan and make decisions, revealing new classes of risks including prompt injection, memory poisoning, and tool misuse that can lead to harmful outcomes even when underlying models function as intended. Industry leaders and CISOs are increasingly recognizing the necessity of offensive security strategies and holistic approaches to address the evolving threat landscape, particularly as AI-driven attacks become more sophisticated. The energy sector, for example, faces heightened threats due to geopolitical tensions and the proliferation of AI-enabled attack tools, prompting calls for multilayered security concepts and proactive measures. As enterprises and critical infrastructure operators accelerate AI adoption, the convergence of new technical frameworks, regulatory guidance, and evolving security practices underscores the urgent need for robust, adaptive defenses against emerging AI-related threats.

3 months ago
CISA Guidance Highlights AI Risk in Operational Technology and Critical Infrastructure

The U.S. **Cybersecurity and Infrastructure Security Agency (CISA)** issued new guidance warning that expanding use of **AI—particularly generative AI tools—in operational technology (OT)** can increase risk across critical infrastructure environments such as power, water, pipelines, and industrial processes. The guidance emphasizes that OT systems historically lag in cybersecurity maturity and are increasingly exposed as they become more internet-connected and integrated with **Industrial IoT (IIoT)** sensors and remote operations; it also flags organizational challenges such as OT security skill gaps and the likelihood of “shadow AI” use even where tools are formally restricted. Separate industry commentary reinforced that AI adoption in OT is accelerating and will increasingly move from monitoring to **recommendation and automated action**, raising the stakes because failures can have physical consequences and cascading operational disruption. Additional perspectives highlighted broader **cyber-physical resilience** issues—arguing that enterprises often fail to integrate physical and cyber security programs effectively—and pointed to basic infrastructure dependencies (e.g., **power redundancy and misconfigured backup power**) as underappreciated factors that can turn outages into major security and safety incidents in converged IT/OT environments.

2 months ago