
US Policy Actions on AI Governance, Standards, and Transparency

ai governance, nist standards, training-data transparency, ai disclosures, department of labor, ncii, workforce surveys
Updated March 8, 2026 at 05:05 AM · 3 sources


US policymakers and regulators advanced multiple AI governance initiatives spanning labor-market measurement, standards-setting, and training-data transparency. Nine US senators urged the Department of Labor, the Bureau of Labor Statistics, and the Census Bureau to expand federal surveys (including the Current Population Survey, JOLTS, and the National Longitudinal Survey) to better quantify AI-driven workforce disruption and potential job growth, arguing current public data is insufficient to track AI’s economic impacts.

Separately, a federal judge denied xAI’s attempt to block a California law requiring disclosures about AI training datasets, finding the company did not sufficiently show the disclosures would reveal protectable trade secrets or violate First/Fifth Amendment rights; the case unfolded amid heightened scrutiny of Grok over harmful outputs (including allegations involving antisemitic content and generation of NCII/CSAM). In Washington, a nominee to lead NIST told lawmakers he would prioritize AI metrology and global standards leadership—framing standards as economically and strategically important—while also emphasizing support for advanced semiconductor manufacturing and alignment with the administration’s AI and industrial policy priorities.

Related Stories

US Congress Advances AI Legislation on Public Awareness and Chip Export Controls


U.S. lawmakers introduced and advanced multiple **artificial intelligence policy bills** spanning education, public awareness, and safety requirements. Proposed measures include the *Expanding AI Voices Act* to codify the National Science Foundation’s *ExpandAI* program and broaden access to AI education and workforce development (including for minority-serving institutions, rural universities, and first-generation students), and the *Artificial Intelligence Public Awareness and Education Campaign Act* directing the Department of Commerce to run a public campaign on AI risks and benefits, individual rights, identifying AI-generated content, and AI’s prevalence in daily life. Coverage also described separate legislation that would require **age verification and protections for minors** using AI chatbots. In parallel, the House Foreign Affairs Committee advanced the **AI Overwatch Act**, which would shift greater authority over exports of high-performance, data center-class AI processors to Congress, expanding oversight beyond the Department of Commerce’s Bureau of Industry and Security. The proposal would codify performance thresholds that still allow certain lower-tier accelerators (e.g., Nvidia **H20** and AMD **MI308**) to ship to non-blacklisted entities in adversary nations without a license, while subjecting higher-performance parts (e.g., Nvidia **H200** and AMD **MI325X**) to export controls plus **congressional review/veto**; it would also terminate existing licenses and impose a temporary blanket denial pending submission of a new national security strategy.

1 month ago
Policy and industry debate over AI safety, governance, and data protection


U.S. policymakers and industry leaders are escalating scrutiny of **AI safety and data protection**, with a particular focus on sensitive data flows and the adequacy of existing guardrails. In a Senate HELP Committee hearing, lawmakers questioned whether federal guardrails are needed to protect Americans’ healthcare data voluntarily uploaded to AI-enabled apps and wearables that may fall outside HIPAA coverage, raising concerns about liability, downstream data use, and integration into medical records; HHS noted it is collecting public input via a request for information on safe and effective AI deployment in healthcare. Separately, commentary on AI governance and safety argues competitive pressure among frontier AI labs can erode safety practices and that clearer antitrust guidance could enable cross-industry collaboration on safety standards without triggering enforcement risk. Tensions over AI “red lines” in national security use also became more public, as **Anthropic** CEO Dario Amodei accused **OpenAI** of misleading messaging about defense work amid reports that Anthropic’s DoD talks faltered over restrictions related to mass domestic surveillance and autonomous weapons, while OpenAI described its agreement as permitting “all lawful purposes” alongside stated prohibitions. Broader, non-incident reporting highlighted enterprise investment to support *agentic AI* (with many data leaders citing governance lagging AI adoption) and general concerns about deepfakes, opaque models, and societal risk; however, several items in the set were primarily newsletters, vendor/industry promotion, or general-interest AI commentary rather than a single, discrete cybersecurity incident or vulnerability disclosure.

1 week ago
AI Adoption and Governance Updates Across Industry and Government


Recent coverage focused on **AI adoption, governance, and societal impacts** rather than a discrete cybersecurity incident. OpenAI CEO **Sam Altman** argued that comparing AI energy use to human cognition is “unfair,” claiming the energy cost of “training a human” (years of living and food consumption plus evolutionary history) should be considered when judging AI efficiency, and separately warned that some companies are engaging in **“AI washing”**—attributing layoffs to AI as a pretext for workforce reductions—while also acknowledging real job displacement is likely to become more noticeable in the next few years. Enterprises and public-sector organizations highlighted practical AI rollouts and associated risk considerations. **Intel** introduced *Ask Intel*, a support assistant built on **Microsoft Copilot Studio**, alongside a shift away from public phone support toward web-based case handling, while noting response accuracy “cannot be guaranteed.” **Microsoft** removed a blog post that had described training LLMs using a Kaggle dataset derived from **pirated Harry Potter ebooks**, amid ongoing legal uncertainty around fair use and potential contributory infringement exposure. Separately, U.S. federal officials emphasized **targeted AI adoption** and expectation management (with the VA reporting hundreds of AI use cases), while other items included a hobbyist AI dashboard project shared on GitHub and a generic startup article on AI-accelerated MVP development—neither of which provided substantive security-relevant disclosures.

3 weeks ago
