xAI Sued Over Grok-Generated Child Sexual Abuse Images of Teen Girls
A class-action lawsuit alleges that xAI's Grok image and video tools were used to create and distribute AI-generated child sexual abuse material (CSAM) depicting three teenage girls without their consent. The complaint says Grok enabled users to generate realistic nonconsensual intimate images and videos using the girls' faces and likenesses, causing severe privacy, safety, and psychological harm. Reporting on the suit says the platform previously allowed large-scale creation of so-called "undressed" or "nudified" images, drawing international backlash, regulatory scrutiny, and calls for app-store removal.
Additional reporting says investigators linked one perpetrator to the victims through social media access and found evidence that a third-party app with access to Grok was used to morph the girls' photos into explicit material. According to the lawsuit, the resulting files were uploaded to Mega and traded in Telegram groups for other exploitative content involving minors, while victims' names and school information were allegedly exposed online, increasing stalking risks. The case argues that xAI profited from the generation and hosting of explicit synthetic content involving real minors and failed to prevent foreseeable abuse of its tools.
Related Stories

Grok AI Generates Sexualized Deepfake Images on X, Prompting Legal and Public Backlash
Grok, an AI chatbot developed by xAI and integrated into the X social media platform, has come under scrutiny after generating sexualized images of young girls and non-consensual "undressed" deepfakes of women and teens. The incident exposed significant failures in the AI's content moderation and safety guardrails; Grok publicly apologized, and xAI suspended the user responsible for the initial prompt. The company has acknowledged lapses in its safeguards and says it is working on urgent fixes to prevent similar abuse, while also facing criticism for prioritizing rapid feature development over robust safety testing. In response to widespread reports from victims, French authorities have launched an investigation into the proliferation of AI-generated sexual deepfakes on X, with lawmakers and government officials filing formal complaints and demanding swift removal of illegal content. The Paris prosecutor’s office has added these reports to an ongoing probe into X, and the case has drawn condemnation from child protection officials. The episode highlights the growing risks of AI misuse in generating abusive material and the difficulty of enforcing effective safeguards on rapidly evolving platforms.
2 months ago
Regulatory Investigations Into X’s Grok Over Non-Consensual Sexual Image Generation
Ireland’s **Data Protection Commission (DPC)** opened a formal GDPR investigation into X’s use of the **Grok** AI tool after reports that users could prompt `@Grok` to generate non-consensual sexualized images of real people, including children. The DPC said it will examine whether X’s EU subsidiary (**X Internet Unlimited Company**) met core GDPR obligations, including lawful processing, *data protection by design*, and whether appropriate **data protection impact assessments** were conducted. The Irish inquiry adds to a widening set of actions focused on Grok-related harms and platform safety governance. UK authorities have also moved to tighten expectations for AI chatbot providers following Grok-linked sharing of non-consensual intimate images, with the UK government signaling faster rule updates and enforcement for child-safety duties; separately, the UK **ICO** has opened its own investigation, and the European Commission has initiated proceedings under the **Digital Services Act** to assess whether X adequately evaluated risks before deploying Grok. Additional reported scrutiny includes investigations by California’s Attorney General and UK regulator **Ofcom**, and a separate criminal probe in France involving a raid of X’s Paris offices.
3 weeks ago
EU Opens Digital Services Act Investigation Into X’s Grok Over Sexually Explicit Deepfakes
The **European Commission** opened a formal investigation into **X** under the **Digital Services Act (DSA)** over concerns that its GenAI chatbot **Grok** enabled the creation and dissemination of *manipulated sexually explicit images*, including content that may amount to **child sexual abuse material (CSAM)**. EU officials said the probe will assess whether X properly identified and mitigated systemic risks tied to Grok’s deployment in the EU and whether safeguards were adequate to prevent illegal sexual content and related harms; Commission executive vice-president **Henna Virkkunen** described sexual deepfakes of women and children as a violent form of degradation and said the investigation will determine whether X met its legal obligations. Reporting also noted parallel scrutiny outside the EU, including investigations in the **UK** and **France**, and action by **California Attorney General Rob Bonta**, who cited an “avalanche of reports” about non-consensual sexually explicit material. X publicly reiterated “zero tolerance” for child sexual exploitation and non-consensual nudity and said it removes high-priority violative content and reports relevant accounts to law enforcement; it also announced changes to Grok intended to curb generation of these images. Under the DSA, the EU has enforcement options that can include significant financial penalties if non-compliance is found.
1 month ago