Grok is still producing millions of sexualized images of adults and children

Written on 01/22/2026

A new report by the Center for Countering Digital Hate and an investigation by the New York Times lay out the sheer scale of Grok's undressing problem.

The true scale of Grok's deepfake problem is becoming clearer as the social media platform and its AI startup xAI face ongoing investigations into the chatbot's safety guardrails.

According to a report by the Center for Countering Digital Hate (CCDH) and a joint investigation by the New York Times, Grok was still able to produce an estimated 3 million sexualized images, including 23,000 that appear to depict children, over a 10-day period following xAI's supposed crackdown on deepfake "undressing." The CCDH tested a sample of responses from Grok's one-click editing tool, which remains available to X users, and calculated that more than half of the chatbot's responses included sexualized content.

The New York Times report found that an estimated 1.8 million of 4.4 million Grok images were sexual in nature, with some depicting well-known influencers and celebrities. The publication also linked a sharp increase in Grok usage to public posts by xAI CEO Elon Musk featuring Grok-generated images of himself in a bikini.

"This is industrial-scale abuse of women and girls," CCDH chief executive Imran Ahmed told the publication. "There have been nudifying tools, but they have never had the distribution, ease of use or the integration into a large platform that Elon Musk did with Grok."

Grok has come under fire for generating child sexual abuse material (CSAM), following reports that the X chatbot produced images of minors in scantily clad outfits. The platform acknowledged the issue and said it was urgently fixing "lapses in safeguards."

Grok parent company xAI is being investigated by multiple foreign governments and the state of California for its role in generating sexualized or "undressed" deepfakes of adults and minors. A handful of countries have even temporarily banned the platform as investigations continue.

In response, xAI said it was blocking Grok from editing user-uploaded photos of real people to feature revealing clothing, the original issue flagged by users earlier this month. However, recent reporting from the Guardian found that Grok app users were still able to produce AI-edited images of real women in bikinis and then upload them to the site.

In reporting from August, Mashable editor Timothy Beck Werth noted problems with Grok's reported safety guardrails, including the fact that Grok Imagine readily produced sexually suggestive images and videos of real people. Grok Imagine includes moderation settings and safeguards intended to block certain prompts and responses, but Musk also advertised Grok as one of the few mainstream chatbots with a "Spicy" setting for sexual content. OpenAI also teased an NSFW setting, amid lawsuits claiming its ChatGPT product is unsafe for users.

Online safety watchdogs have long warned the public about generative AI's role in the growing volume of synthetically generated CSAM, as well as nonconsensual intimate imagery (NCII), which is addressed in 2025's Take It Down Act. Under the new U.S. law, online publishers are required to comply with takedown requests for nonconsensual deepfakes or face penalties.

A 2024 report from the Internet Watch Foundation (IWF) found that generative AI tools were directly linked to an increase in CSAM on the dark web, predominantly depicting young girls in sexual scenarios or digitally altering real pornography to include the likenesses of children. AI tools and "nudify" apps have been linked to rises in cyberbullying and AI-enabled sexual abuse.