Mallory

UK Regulators Fine Online Platforms for Failing to Implement Effective Age Assurance

Tags: regulatory fines, age verification, Online Safety Act, age assurance, Ofcom, adult content, data protection, children's data, privacy, credit card checks
Updated February 25, 2026 at 01:11 PM · 7 sources


UK regulators issued major penalties against online services for inadequate age assurance controls intended to protect children. The Information Commissioner’s Office (ICO) fined Reddit £14.47 million for unlawfully processing children’s data, alleging that despite a stated under-13 prohibition, Reddit did not introduce an age assurance mechanism until July 2025 and had not completed a required data protection impact assessment (DPIA) before January 2025. The ICO said these failures potentially exposed minors to inappropriate content and left under-13 users’ personal data collected and used without a lawful basis; Reddit said it intends to appeal.

Separately, communications regulator Ofcom fined porn operator 8579 LLC £1.35 million under the UK Online Safety Act for failing to deploy “highly effective” age checks (e.g., photo ID matching or credit card checks) to prevent minors from accessing adult content. Ofcom also imposed an additional £50,000 penalty for allegedly ignoring information requests and warned of an ongoing £1,000/day penalty until compliant age verification is implemented, amid broader concerns from civil liberties groups about the privacy and cybersecurity risks of stringent age-verification regimes.


Related Stories

Regulatory Push to Strengthen Child Online Safety and Age Assurance on Social Platforms


UK regulators **Ofcom** and the **Information Commissioner’s Office (ICO)** issued warnings to major social media and video platforms (including **Facebook, Instagram, Snapchat, TikTok, and YouTube**) demanding “urgent steps” to implement more robust **age assurance** controls to prevent access by children under 13. Regulators signaled potential enforcement if platforms continue relying primarily on easily bypassed self-declared ages, arguing this enables unlawful collection and use of children’s data and exposes under-13s to services not designed for them; Ofcom requested companies report back on their plans by the end of April. In the US, the House Energy and Commerce Committee advanced the **Kids Internet and Digital Safety (KIDS) Act** on a party-line vote amid Democratic objections that the bill could reduce platform accountability for harms to minors. Criticisms focused on provisions that could **preempt certain state laws**, a **knowledge requirement** that opponents argue may let companies claim ignorance of minors’ presence, and the lack of a proactive **“duty of care”** requirement; proposed amendments to strengthen protections were not adopted.

5 days ago
Regulatory Push for Online Age Verification and Adult-Site Access Restrictions


A growing regulatory push to require **online age verification**—particularly for access to pornography and other age-restricted content—is accelerating in the U.S. and U.K., with policymakers framing it as a child-safety measure and critics warning of privacy and free-speech risks. An **FTC** commissioner publicly endorsed age verification as a tool to protect children online, pointing to widespread state-level adoption in the U.S. and noting that court outcomes have been mixed, including a **U.S. Supreme Court** decision upholding a Texas law requiring pornography sites to verify users’ ages. In the U.K., the **Online Safety Act (OSA)** is driving direct service changes: **Aylo** (parent company of Pornhub and other tube sites) said it will **restrict access in the United Kingdom** rather than implement the OSA’s age-checking approach for all visitors, while allowing continued access for users who have already verified their identity. Aylo argued the framework diverts traffic to unregulated sites and creates privacy risks, while **Ofcom** countered that services can either implement compliant age checks or block U.K. access and urged development of effective device-level solutions.

1 month ago
Regulatory and legal scrutiny of online platforms over child safety, age verification, and gambling-like mechanics


New York Attorney General Letitia James filed suit against **Valve**, alleging *Steam*’s loot boxes and the broader **skin economy** enable “illegal gambling,” including through third-party sites that let users resell in-game items for cash and use Steam inventories as virtual chips for gambling. The complaint argues Valve has only “sporadically enforced” rules against skin-gambling sites and seeks changes to or elimination of loot boxes plus consumer restitution/disgorgement; the reporting also notes prior (dismissed) parent lawsuits and earlier pressure from Washington state to crack down on skin gambling. Separate legal and policy actions focused on **child safety and age assurance** across major platforms. Los Angeles County sued **Roblox**, alleging the platform misled parents about safety while exposing children to grooming and explicit content, and highlighting historical gaps in messaging controls and weak age verification; the suit also points to Roblox’s more recent use of third-party *Persona* facial age checks to access chat features. Court filings in a multidistrict litigation against **Meta/Instagram** surfaced internal discussions (including then-CISO Guy Rosen) indicating executives were aware as early as 2018 that adults could message minors with explicit content; Instagram’s client-side classifier that blurs explicit images for teens reportedly did not roll out until 2024. In parallel, **Discord** paused and reworked a planned global age-verification policy after backlash, delaying rollout to the second half of 2026 and committing to additional verification options (beyond government ID/video selfies), vendor transparency, and a technical explanation of its “age determination systems.”

2 weeks ago
