Platforms Expand Identity and Age-Verification Features for Privacy and Adult-Content Access
Google upgraded its Results About You safety feature to detect, and request removal of, additional sensitive identifiers exposed in Search results, including government ID numbers such as passport numbers, driver’s license numbers, and Social Security numbers. The update also streamlines Google’s process for reporting and removing non-consensual explicit imagery (NCEI), including deepfakes and other AI-generated sexualized content, and reflects increased platform focus on limiting the discoverability of highly sensitive personal data and abusive imagery.
Discord announced it will begin requiring age verification globally for access to adult content, using either an ID upload or an AI-based video selfie to estimate age. Discord stated that neither it nor its verification provider will retain verification data, claiming that face scans are not collected and that ID images are deleted after verification. The move highlights ongoing industry momentum toward stronger identity and age-gating controls, paired with privacy assurances about data handling.
Related Stories
Discord Global Age Verification Rollout After Third-Party ID Image Breach
**Discord** announced a phased global rollout requiring users to verify their age using **video selfies or government IDs**, citing growing regulatory pressure for age checks on social platforms and a goal of providing a “teen-appropriate experience by default.” Discord said the verification data will be **deleted immediately after age is confirmed** and claimed it **will not leave the user’s device**; the company also described new defaults that restrict access to age-gated features (e.g., blurring sensitive content and limiting age-restricted channels and commands to verified adults). The rollout is expected to begin in early March, following earlier “teen-by-default” measures introduced in the U.K. and Australia. The policy change triggered backlash in gaming communities over privacy and breach concerns, amplified by a prior incident in which **roughly 70,000 images of government IDs** were exposed after users had uploaded them for customer-service purposes; reporting attributes the exposure to a **third-party service** Discord used to manage data. Discord is attempting to reassure users by pointing to tightened controls and a partnership with *k-ID* for age checks, but critics highlighted perceived ambiguity in how ID scans may be handled (including potential uploads to vendor servers and the involvement of additional third parties) and warned that expanding the collection of sensitive identity data increases the platform’s attractiveness as a target.
1 month ago
Persona Age-Verification Frontend Exposure Raises Privacy and Surveillance Concerns for Discord Users
Security researchers investigating Discord’s UK age-verification rollout reported finding a **publicly exposed Persona frontend** (Persona is the identity-verification vendor used by Discord) on a **US government–authorized endpoint**, with **2,456 accessible files**. The exposed materials (since removed) allegedly revealed Persona’s broader **KYC/AML and surveillance-oriented capabilities** beyond age estimation, including **269 verification checks**, facial-recognition comparisons against **watchlists** and **politically exposed persons (PEP)** lists, “adverse media” screening across multiple categories (including terrorism/espionage), and the generation of risk and similarity scores. The reporting also described extensive data collection and retention claims, covering IP addresses, browser/device fingerprints, government ID numbers, phone numbers, names, faces, and “selfie” analytics, with retention described as up to **three years**. The discovery intensified backlash over Discord’s requirement that some users verify their age (including via face scanning) to restore full functionality, and it fueled online allegations that the tooling could enable the creation of broader watchlists. Persona publicly disputed insinuations of improper government ties and stated that it invests in compliance and controls to protect sensitive data; it also said investors do not have access to Persona data and denied operational involvement by specific investors cited in the controversy. Ars Technica reported that OpenAI did not immediately respond to a request for comment regarding claims about an internal database related to Persona identity checks, while Persona characterized circulating claims as misleading and said any potential government engagements would be limited to workforce account security and exclude DHS/ICE.
3 weeks ago
Google Search Expands Removal Tools for Non-Consensual Explicit Imagery and Exposed Government ID Numbers
Google rolled out expanded Search controls aimed at reducing harm from **non-consensual explicit imagery** (including AI-generated deepfakes) and the exposure of **sensitive personal identifiers**. Users can now request de-indexing of explicit images of themselves directly from Search results (e.g., via the three-dot menu → `Remove result` → “It shows a sexual image of me”), submit multiple images in one request, indicate whether content is real or a deepfake, and access links to legal and emotional support resources during the process. Google also added an opt-in safeguard intended to proactively filter similar explicit results from future searches. In addition, Google enhanced the *Results about you* hub to monitor and help remove Search results containing government-issued ID numbers, including **Social Security numbers**, **driver’s license numbers**, **passport numbers**, and similar identifiers. Users provide the contact details and ID numbers they want monitored; when matching results appear in Search, Google notifies the user so they can request removal. Google emphasized that removing a result from Search does not delete the underlying content from the web, and that requests are subject to internal security and privacy checks to reduce misuse.
1 month ago