California Digital Age Assurance Act Mandates OS-Level Age Signaling via API
California’s Digital Age Assurance Act (AB 1043) requires operating system providers to collect a user’s age information during OS account setup and to expose an age-range signal to application developers through a “reasonably consistent” real-time API when an app is downloaded or launched. The law’s definition of an OS provider is broad enough to cover major commercial platforms (Windows, macOS, Android, iOS) as well as Linux distributions and Valve’s SteamOS, and it specifies four age brackets: under 13, 13 to under 16, 16 to under 18, and 18 and over. Developers who request and receive the signal are treated as having “actual knowledge” of the user’s age range, shifting compliance and content-suitability liability toward app providers. Enforcement is assigned to the California Attorney General, with penalties of up to $2,500 per affected child for negligent violations and $7,500 per affected child for intentional violations.
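The statute prescribes the four brackets but not a concrete interface, so the mapping below is a minimal illustrative sketch, not any platform's actual API; the function name, bracket labels, and return values are assumptions for illustration.

```python
from datetime import date

# Hypothetical labels for AB 1043's four statutory age ranges;
# real OS APIs may name or encode these differently.
BRACKETS = ("under_13", "13_to_15", "16_to_17", "18_plus")

def age_bracket(birthdate: date, today: date) -> str:
    """Map a birthdate to one of the statute's four age ranges."""
    # Compute completed years of age, adjusting if the birthday
    # has not yet occurred this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_15"
    if age < 18:
        return "16_to_17"
    return "18_plus"

# Example: a user born Jan. 1, 2011 is 14 as of June 1, 2025.
print(age_bracket(date(2011, 1, 1), date(2025, 6, 1)))  # 13_to_15
```

An OS provider would return only the bracket, never the exact birthdate, which is the privacy trade-off the signal design is meant to capture.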
Separate reporting highlights broader age/identity verification pressure across consumer platforms, including Discord’s planned move to require age verification in 2026, which has triggered privacy concerns about submitting government IDs or face scans and renewed scrutiny following a prior breach that exposed IDs for roughly 70,000 users. Other items in the set are not about AB 1043 or OS-level age signaling and instead cover general security roundups, interviews, conference write-ups, exam prep material, and unrelated policy or industry commentary; they do not add substantiated details about the California OS age-verification mandate or its implementation requirements.
Related Stories

Apple Expands App Store Age Assurance and 18+ Download Restrictions
Apple introduced expanded *age assurance* capabilities for the App Store to support compliance with new or emerging regulations in multiple jurisdictions, including Brazil, Australia, Singapore, Utah, and Louisiana. As of **Feb. 24, 2026**, Apple began blocking downloads of **18+ rated apps** in Brazil, Australia, and Singapore unless the user is confirmed to be an adult, using what Apple describes as “reasonable methods” for age confirmation. Apple also expanded the **Declared Age Range API** (iOS/iPadOS/macOS) and related platform components (including PermissionKit’s *Significant Change API*, a new StoreKit age-rating property type, and App Store Server Notifications) to provide developers with an age category plus signals about the assurance method and whether regulatory requirements apply; in Brazil, certain disclosures (e.g., loot boxes) can drive an app’s rating to **18+**. Broader policy debate continues around online age assurance in the U.S. and internationally, with jurisdictions adopting or considering stricter mandates and platforms preparing new verification requirements. Public skepticism remains elevated due to backlash against age-gating (including reported VPN usage spikes in response to the UK’s requirements) and concerns about data security following breaches at age-verification providers (e.g., **Sumsub** disclosing a previously undetected 2024 compromise). The policy environment is also being shaped by U.S. state laws and litigation, including the Supreme Court’s decision in *Free Speech Coalition v. Paxton* upholding Texas’s age verification law, while proponents argue that privacy-preserving age assurance approaches are becoming more technically mature and scalable.
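The storefront-dependent download rule described above can be sketched as a small gating check. This is an illustrative sketch only, not Apple's StoreKit or Declared Age Range API; the storefront codes and parameter names are assumptions.

```python
# Storefronts where 18+ downloads require a confirmed adult, per the
# reported Feb. 24, 2026 change: Brazil, Australia, Singapore.
RESTRICTED_STOREFRONTS = {"BRA", "AUS", "SGP"}

def download_allowed(rating_18_plus: bool, storefront: str,
                     adult_confirmed: bool) -> bool:
    """Return True if the download may proceed under the gating rule."""
    if not rating_18_plus:
        return True          # Non-18+ apps are unaffected.
    if storefront not in RESTRICTED_STOREFRONTS:
        return True          # Rule applies only in listed storefronts.
    return adult_confirmed   # Otherwise require a confirmed adult.

# An 18+ app in Brazil is blocked until the user is confirmed an adult.
print(download_allowed(True, "BRA", False))  # False
```

Note the rule is per-storefront rather than per-app: the same 18+ app remains downloadable in storefronts outside the restricted set.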
2 weeks ago
Debate Over Kids Online Safety Act and Age-Verification Requirements for Minors
Policymakers in multiple jurisdictions are advancing **child online safety** rules that would restrict minors’ access to social media, “addictive” product features, and certain content (including pornography), increasing pressure on platforms to implement **age assurance/age verification** to determine users’ ages before allowing access. The Lawfare analysis highlights that while protecting children online is a widely shared goal, enforcing age-based restrictions at scale effectively requires collecting and validating age signals for *all* users—raising significant implementation, privacy, and governance challenges as governments consider measures such as the **Kids Online Safety Act (KOSA)**, the **Kids Off Social Media Act**, and the **App Store Accountability Act**.
1 month ago
Discord Global Age Verification Rollout After Third-Party ID Image Breach
**Discord** announced a phased global rollout requiring users to verify their age using **video selfies or government IDs**, citing growing regulatory pressure for age checks on social platforms and a goal of providing a “teen-appropriate experience by default.” Discord said the verification data will be **deleted immediately after age is confirmed** and claimed it **will not leave the user’s device**; the company also described new defaults that restrict access to age-gated features (e.g., blurring sensitive content and limiting age-restricted channels/commands to verified adults). The rollout is expected to begin in early March, following earlier “teen-by-default” measures introduced in the U.K. and Australia. The policy change triggered backlash in gaming communities due to privacy and breach concerns, amplified by a prior incident in which **roughly 70,000 images of government IDs** were exposed after users had uploaded them for customer service purposes; reporting attributes the exposure to a **third-party service** Discord used to manage data. Discord is attempting to reassure users by pointing to tightened controls and a partnership with *k-ID* for age checks, but critics highlighted perceived ambiguity in how ID scans may be handled (including potential uploads to vendor servers and involvement of additional third parties), and warned that expanding collection of sensitive identity data increases the platform’s attractiveness as a target.
1 month ago