Elon Musk’s X platform is facing a deluge of criticism after users began using Grok, the platform’s artificial intelligence technology, to generate sexually explicit photos of real people.
Some are using the app to instruct Grok to remove clothes from pictures of actual human beings — some of whom are children.
Now, Congress appears to be getting involved. A group of Democratic senators, including Ron Wyden (OR), Ben Ray Luján (NM), and Ed Markey (MA), sent a letter to Apple and Google demanding that they remove the Grok app from their app stores over X’s “mass generation of nonconsensual sexualized images of women and children.”
The lawmakers pointed out that the platform’s “generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms.”
In recent days, X users have used the app's Grok AI tool to generate nonconsensual sexual imagery of real, private citizens at scale. This trend has included Grok modifying images to depict women being sexually abused, humiliated, hurt, and even killed. In some cases, Grok has reportedly created sexualized images of children—the most heinous type of content imaginable. What is more, X has reportedly encouraged this behavior, including through the company's CEO Elon Musk acknowledging this trend with laugh-cry emoji reactions. Researchers have also found a Grok app archive reportedly containing nearly 100 images of potential child sexual abuse materials generated since August, in addition to many other nonconsensual nude depictions of real people being tortured and worse. There can be no mistake about X's knowledge, and, at best, negligent response to these trends.
The letter further argues that “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices” and that “not taking action would undermine your claims in public and in court that your app stores offer a safer user experience than letting users download apps directly to their phones.”
I find it grossly hilarious in a dark way, that grok has zero trouble generating pictures of women and little girls with their clothing removed, but it draws the line at this. pic.twitter.com/woDEh4w76g
— Buffy the misogynist slayer (cervix haver) (@Opiumbrella) January 7, 2026
The controversy began when users discovered how to create these sexually explicit images without their targets’ consent. Starting in late December, users began asking Grok to “remove clothes” or “undress” people in uploaded photos, The New York Times reported. The AI would comply, generating fake images showing these individuals in bikinis, lingerie, or other sexualized poses.
Researchers found that Grok was creating about one nonconsensual image per minute at the peak of the trend.
The situation worsened when it was revealed that some users were creating sexualized images of children using the app. The Internet Watch Foundation, a charity dedicated to fighting child sexual abuse material, discovered images of young girls between the ages of 11 and 13 that were created using Grok.
Musk appeared to make light of the matter, using Grok to create an image of himself in a bikini. However, he did warn that those using the technology to create illegal content would face consequences. On Thursday, X restricted Grok’s image generation to paid subscribers only.

