EU also investigating as Grok generated 23,000 CSAM images in 11 days
Source: 9to5Mac

The EU has opened its own investigation into the Grok chatbot generating child sexual abuse material. It’s estimated that Grok generated 23,000 CSAM images in just 11 days. Update: a second investigation has been opened in Ireland, focusing on possible privacy violations.
Despite multiple calls for Apple and Google to temporarily remove both X and Grok from the App Store, neither company has yet done so.
Grok generated 23,000 CSAM images
Like most other AI chatbots, xAI’s Grok is able to generate images from text prompts. It can do so directly in the app, on the web, or through X. Unlike other services, however, Grok has extremely loose guardrails that have allowed it to generate non‑consensual semi‑nude images of real individuals, including children.
Engadget reports that one estimate suggested Grok generated around 23,000 CSAM images in just 11 days.
The Center for Countering Digital Hate (CCDH), a British nonprofit, based its findings on a random sample of 20,000 Grok images generated between December 29 and January 9, extrapolating a broader estimate from the 4.6 million images Grok produced during that period.
- Over an 11‑day period, Grok generated an estimated 3 million sexualized images — including an estimated 23,000 of children.
- This works out to roughly 190 sexualized images per minute, with a sexualized image of a child produced about every 41 seconds.
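The per‑minute and per‑second rates quoted above follow directly from the 11‑day totals. A quick sanity check of the arithmetic (using the CCDH’s estimated totals as inputs):

```python
# Verify the rates implied by the CCDH's 11-day estimates.
DAYS = 11
sexualized_total = 3_000_000  # estimated sexualized images over the period
csam_total = 23_000           # estimated sexualized images of children

minutes = DAYS * 24 * 60       # 15,840 minutes in 11 days
seconds = minutes * 60         # 950,400 seconds in 11 days

per_minute = sexualized_total / minutes  # ~189, i.e. "roughly 190 per minute"
secs_per_csam = seconds / csam_total     # ~41, i.e. "about every 41 seconds"

print(round(per_minute), round(secs_per_csam))  # prints: 189 41
```

Both figures match the report’s stated rates to within rounding.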
EU investigation opened
Earlier this month, three US senators asked Apple CEO Tim Cook to temporarily remove both X and Grok from the App Store due to “sickening content generation”. The company has not yet done so.
Two countries have blocked the app, and investigations are already open in both California and the UK. The Financial Times reports that the EU has now opened an investigation as well.
The probe, opened under the EU’s Digital Services Act, will assess whether xAI adequately mitigated the risks of deploying Grok’s tools on X, including the proliferation of content that “may amount to child sexual abuse material”.
“Non‑consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation,” said EU tech chief Henna Virkkunen.
If the company is found to have breached the DSA, it can be fined up to 6% of its annual global revenue.
Photo by Logan Voss on Unsplash.