Regulatory and legal scrutiny of online platforms over child safety, age verification, and gambling-like mechanics
New York Attorney General Letitia James filed suit against Valve, alleging that Steam’s loot boxes and the broader skin economy enable “illegal gambling,” in part through third-party sites that let users resell in-game items for cash and use Steam inventories as virtual chips. The complaint argues that Valve has only “sporadically enforced” its rules against skin-gambling sites, and it seeks changes to or the elimination of loot boxes as well as consumer restitution and disgorgement. Reporting on the suit also notes earlier parent lawsuits (since dismissed) and prior pressure from Washington state to crack down on skin gambling.
Separate legal and policy actions focused on child safety and age assurance across major platforms. Los Angeles County sued Roblox, alleging the platform misled parents about safety while exposing children to grooming and explicit content; the suit highlights historical gaps in messaging controls and weak age verification, and points to Roblox’s more recent use of third-party Persona facial age checks to gate access to chat features. Court filings in multidistrict litigation against Meta/Instagram surfaced internal discussions (including by then-CISO Guy Rosen) indicating that executives knew as early as 2018 that adults could message minors with explicit content; Instagram’s client-side classifier that blurs explicit images for teens reportedly did not roll out until 2024. In parallel, Discord paused and reworked a planned global age-verification policy after backlash, delaying rollout to the second half of 2026 and committing to additional verification options (beyond government ID or video selfies), vendor transparency, and a technical explanation of its “age determination systems.”
Related Stories

Regulatory Push to Strengthen Child Online Safety and Age Assurance on Social Platforms
UK regulators **Ofcom** and the **Information Commissioner’s Office (ICO)** issued warnings to major social media and video platforms (including **Facebook, Instagram, Snapchat, TikTok, and YouTube**) demanding “urgent steps” to implement more robust **age assurance** controls to prevent access by children under 13. Regulators signaled potential enforcement if platforms continue relying primarily on easily bypassed self-declared ages, arguing this enables unlawful collection and use of children’s data and exposes under-13s to services not designed for them; Ofcom requested companies report back on their plans by the end of April. In the US, the House Energy and Commerce Committee advanced the **Kids Internet and Digital Safety (KIDS) Act** on a party-line vote amid Democratic objections that the bill could reduce platform accountability for harms to minors. Criticisms focused on provisions that could **preempt certain state laws**, a **knowledge requirement** that opponents argue may let companies claim ignorance of minors’ presence, and the lack of a proactive **“duty of care”** requirement; proposed amendments to strengthen protections were not adopted.
UK Regulators Fine Online Platforms for Failing to Implement Effective Age Assurance
UK regulators issued major penalties against online services for inadequate **age assurance** controls intended to protect children. The Information Commissioner’s Office (**ICO**) fined **Reddit £14.47 million** for unlawfully processing children’s data, alleging that despite a stated under-13 prohibition, Reddit did not introduce an age assurance mechanism until **July 2025** and had not completed a required **data protection impact assessment (DPIA)** before **January 2025**. The ICO said these failures potentially exposed minors to inappropriate content and left under-13 users’ personal data collected and used without a lawful basis; Reddit said it intends to appeal. Separately, communications regulator **Ofcom** fined porn operator **8579 LLC £1.35 million** under the UK **Online Safety Act** for failing to deploy “highly effective” age checks (e.g., photo ID matching or credit card checks) to prevent minors from accessing adult content. Ofcom also imposed an additional **£50,000** penalty for allegedly ignoring information requests and warned of an ongoing **£1,000/day** penalty until compliant age verification is implemented, amid broader concerns from civil liberties groups about the privacy and cybersecurity risks of stringent age-verification regimes.
Meta Expands Safety and Enforcement Measures Across Facebook and Instagram
Meta disclosed a set of new **platform safety and enforcement actions** aimed at reducing harm and abuse on its services. The company filed multiple lawsuits against alleged scam-ad operators in **Brazil, China, Vietnam** and elsewhere, describing tactics including **deepfakes/celebrity impersonation**, “celeb-bait” investment lures, and **cloaking** used to evade ad review; Meta said it also took technical steps such as disabling accounts, suspending scam-linked payment methods, and blocking associated domains, and shared information with industry partners to help them block the same actors. Separately, Meta announced new **Instagram parental-supervision alerts** that notify parents when a teen repeatedly searches for **self-harm or suicide-related terms** within a short time window (initially for supervised accounts in the **U.S., U.K., Australia, and Canada**), and said it is developing similar notifications for teens’ **AI-related conversations** about self-harm. In parallel regulatory developments, EU lawmakers advanced a non-binding opinion supporting **privacy-friendly age verification** and proposing restrictions that would require **parental consent for under-16s** and bar access for children under 13, positioning these measures for potential inclusion in a future **Digital Fairness Act** focused on child protection online, targeted advertising, and addictive design patterns.