US Congress Advances AI Legislation on Public Awareness and Chip Export Controls
U.S. lawmakers introduced and advanced multiple artificial intelligence policy bills spanning education, public awareness, and safety requirements. The Expanding AI Voices Act would codify the National Science Foundation’s ExpandAI program and broaden access to AI education and workforce development, including for minority-serving institutions, rural universities, and first-generation students. The Artificial Intelligence Public Awareness and Education Campaign Act would direct the Department of Commerce to run a public campaign covering AI’s risks and benefits, individual rights, how to identify AI-generated content, and AI’s prevalence in daily life. Separate legislation would require age verification and protections for minors using AI chatbots.
In parallel, the House Foreign Affairs Committee advanced the AI Overwatch Act, which would shift greater authority over exports of high-performance, data center-class AI processors to Congress, expanding oversight beyond the Department of Commerce’s Bureau of Industry and Security. The bill would codify performance thresholds under which certain lower-tier accelerators (e.g., Nvidia H20 and AMD MI308) could still ship to non-blacklisted entities in adversary nations without a license, while higher-performance parts (e.g., Nvidia H200 and AMD MI325X) would face export controls plus congressional review and veto. It would also terminate existing licenses and impose a temporary blanket denial pending submission of a new national security strategy.
Related Stories
US Legislative Actions Targeting AI and Cybersecurity in National Security Context
The US Senate has introduced the Secure and Feasible Exports Act (SAFE), a bipartisan bill aimed at restricting the export of advanced AI chips, such as Nvidia's Blackwell and Hopper GPUs, to countries considered adversaries, including China and Russia. The bill would halt export licenses for these chips for 30 months, impacting not only Nvidia but also AMD and Google's latest AI hardware. Despite these measures, industry experts note that training workloads still heavily depend on Nvidia hardware, and there are multiple avenues for circumventing such export controls, making a complete withdrawal from the Chinese market unlikely. Simultaneously, the fiscal 2026 National Defense Authorization Act (NDAA) includes several cybersecurity provisions relevant to AI and national security. The NDAA mandates secure mobile phones for senior Defense Department leaders, updates cybersecurity training to address AI-specific threats, and ensures mental health support for cyber personnel. These legislative efforts reflect a broader US strategy to strengthen national security by controlling access to advanced AI technologies and enhancing the cybersecurity posture of defense operations.
Proposed US Export Controls for Advanced AI Accelerators
The U.S. Department of Commerce is preparing a **sweeping, tiered export-control regime** for advanced AI accelerators from U.S. vendors such as **Nvidia** and **AMD**, expanding beyond country-specific restrictions into a broader licensing framework that could require U.S. approval for a wide range of global shipments. Reporting describes a multi-level structure tied to computing scale: smaller shipments (e.g., up to **1,000 Nvidia GB300 GPUs**) would face an expedited review, while mid-scale deployments would require **pre-authorization** before an export license application, along with compliance measures such as operational transparency, disclosure of business activities, and potential **on-site inspections** by U.S. authorities. For very large AI clusters (described as deployments on the order of **200,000 GB300 GPUs** operated by a single entity in one country), the proposed approach would elevate requirements to **government-to-government engagement** and could condition approvals on commitments to **invest in U.S. AI infrastructure** as part of national-security assurances. Separate reporting covers adjacent U.S. technology-security policy issues—NIST leadership testimony on AI standards and manufacturing priorities, calls within China’s semiconductor industry to consolidate efforts to build an ASML alternative under export-control pressure, and CFIUS deliberations over Tencent’s stakes in major game companies—but those are distinct policy stories from the AI-accelerator export-rule proposal.
US Policy Actions on AI Governance, Standards, and Transparency
US policymakers and regulators advanced multiple **AI governance** initiatives spanning labor-market measurement, standards-setting, and training-data transparency. Nine US senators urged the Department of Labor, the Bureau of Labor Statistics, and the Census Bureau to expand federal surveys (including the *Current Population Survey*, *JOLTS*, and the *National Longitudinal Survey*) to better quantify AI-driven workforce disruption and potential job growth, arguing current public data is insufficient to track AI’s economic impacts. Separately, a federal judge denied **xAI**’s attempt to block a California law requiring disclosures about AI training datasets, finding the company did not sufficiently show the disclosures would reveal protectable trade secrets or violate First/Fifth Amendment rights; the case unfolded amid heightened scrutiny of *Grok* over harmful outputs (including allegations involving antisemitic content and generation of NCII/CSAM). In Washington, a nominee to lead **NIST** told lawmakers he would prioritize AI metrology and global standards leadership—framing standards as economically and strategically important—while also emphasizing support for advanced semiconductor manufacturing and alignment with the administration’s AI and industrial policy priorities.