
Grok Is Generating About ‘One Nonconsensual Sexualized Image Per Minute’

Regulators around the world are looking into Elon Musk’s xAI after its chatbot began ‘undressing’ celebrities and underage children at the request of users

KEVIN DIETSCH/GETTY IMAGES

On Sunday, the pop culture news X account @PopBase shared a typical piece of content with its millions of followers. “Sabrina Carpenter stuns in new photo,” read the post, which featured a picture of the “Manchild” singer wearing a pink winter coat with a snowy landscape behind her. The following day, an X user replied to the post with a request for Grok, the AI chatbot developed by Elon Musk’s xAI, which is integrated into his social media platform. “Put her in red lingerie,” they commanded the bot, which swiftly returned an image of Carpenter stripped of her outerwear and wearing a lacy red lingerie set, still standing in the same winter scene, with a similar expression on her face.


Over the holiday break, a critical mass of X users came to realize that Grok will readily “undress” women — manipulating existing photos of them in order to create deepfakes in which they are shown wearing skimpy bikinis or underwear — and this sort of exchange soon became alarmingly common. Some of the first to try such prompts appeared to be adult creators looking to draw potential customers to their social pages by rendering racier versions of their thirst-trap material. But the bulk of Grok’s recent deepfakes have been churned out without consent: the bot has disrobed everyone from celebrities like Carpenter to non-famous individuals who happened to share an innocent selfie on the internet.

Though Grok is not the only AI tool to be exploited for these purposes (Google and OpenAI chatbots can be weaponized in much the same way), the scale, severity, and visibility of the issue with Musk’s bot as 2026 rolled around were unprecedented. According to a review by the content analysis firm Copyleaks, Grok has lately been generating “roughly one nonconsensual sexualized image per minute,” each of them posted directly to X, where they have the potential to go viral. Apart from changing what a woman is wearing in a picture, X users have routinely asked for sexualized modifications of poses, e.g., “spread her legs,” or “make her turn around to show her ass.” Grok continues to comply with many of these instructions, though some specific phrases are no longer as effective as they had been.

Musk hasn’t shown much concern to date — quite the opposite, in fact. On Dec. 31, he replied to a Grok-made image of a man in a bikini by posting: “Change this to Elon Musk.” Grok dutifully delivered an image of Musk in a bikini, to which the world’s richest man responded, “Perfect.” On Jan. 2, an X user mentioned the nonconsensual Grok deepfakes by commenting that “Grok’s viral image moment has arrived, it’s a little different than the Ghibli one was though.” (In March 2025, users of OpenAI’s ChatGPT enlisted it to spam AI-generated memes in the illustration style of Japanese animation house Studio Ghibli.) Musk replied, “Way funnier,” along with a laugh-crying emoji, indicating his amusement at the bikini and lingerie pictures.

The CEO’s single, glancing acknowledgement that the explicit Grok deepfakes may present a legal problem came on Jan. 3, when he replied to a post from @cb_doge, an X influencer known for relentlessly hyping Musk’s ideas and companies. “Some people are saying Grok is creating inappropriate images,” they wrote. “But that’s like blaming a pen for writing something bad.” Musk chimed in to assign blame to Grok users, warning: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

So far, there’s no sign of that being remotely true. “While X appears to be taking steps to limit certain prompts from being carried out, our follow-up review indicates that problematic behavior persists, often through modified or indirect prompt language,” Copyleaks reported in a second analysis shared with Rolling Stone ahead of publication. Among the high-profile figures targeted were Taylor Swift, Elle Fanning, Olivia Rodrigo, Millie Bobby Brown, and Sydney Sweeney. Common prompts included “put her in saran wrap,” “put oil all over her,” and “bend her over,” with some specific phrases — “add donut glaze” — clearly intended to imply sexual activity. But in many cases, Copyleaks researchers found, an initial request for something relatively non-explicit, like a bathing-suit picture, would lead to other users in a thread escalating the violation by asking for more graphic manipulations, adding visual elements such as props, text, and other people. “This progression suggests collaboration and competition among users,” they wrote.

“Unfortunately, the trend appears to be continuing,” says Alon Yamin, CEO and co-founder of Copyleaks. “We are also observing more creative attempts to circumvent safeguards as X works to block or reduce image generation around certain phrases.” Yamin believes that “detection and governance are needed now more than ever to help prevent misuse” of image generators like Grok and OpenAI’s Sora.

The explosion of explicit Grok deepfakes has sparked outrage from victims of this harassment as well as industry watchdogs and regulators. Authorities in France and India are probing the matter, while the U.K.’s Office of Communications signaled on Monday that it plans to investigate whether X and xAI violated regulations meant to protect internet users in the country. Ofcom’s statement also alluded to instances in which Grok generated sexualized, nonconsensual deepfakes of minors.

The European Commission likewise on Monday announced an investigation into Grok’s “explicit” imagery, particularly that of children. “Child sexual abuse material is illegal,” European Union digital affairs spokesman Thomas Regnier said in a statement to Rolling Stone. “This is appalling. This is how we see it and it has no place in Europe. We can confirm that we are very seriously looking into these issues.”

On Dec. 31, Grok was even baited by an X user into offering a seeming “apology” — though of course it is not conscious and therefore literally incapable of regret — for serving up “an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt.” Grok further acknowledged that the post “violated ethical standards and potentially U.S. laws on [Child Sexual Abuse Material].” This output contained the additional claim that “xAI is reviewing to prevent future issues.” (The company did not respond to a request for comment, nor has it addressed the deepfakes on its website or X profile.)

Cliff Steinhauer, Director of Information Security and Engagement at the nonprofit National Cybersecurity Alliance, tells Rolling Stone that he sees the disturbing image edits as evidence that xAI prioritized neither safety nor consent in building Grok. “Allowing users to alter images of real people without notification or permission creates immediate risks for harassment, exploitation, and lasting reputational harm,” Steinhauer says. “When those alterations involve sexualized content, particularly where minors are concerned, the stakes become exceptionally high, with profound and lasting real-world consequences. These are not edge cases or hypothetical scenarios, but predictable outcomes when safeguards fail or are deprioritized.”

Among those now sounding the alarm on Grok’s possible harms to adults and children alike is Ashley St. Clair, the right-wing influencer currently embroiled in a bitter paternity dispute with Musk over a young son she says he fathered. (Musk has yet to confirm that the child is his.) St. Clair claimed that Grok had been used to violate her privacy and generate inappropriate images based on photos of her as a minor. She amplified another example of the bot allegedly depicting a three-year-old girl in a revealing bikini.

“When Grok went full MechaHitler, the chatbot was paused to stop the content,” St. Clair wrote on X, referring to a notorious July 2025 incident during which Grok spouted antisemitic rhetoric before identifying itself as a robotic version of the Nazi leader. Those posts were taken down the same day they were generated. “When Grok is producing explicit images of children and women, xAI has decided to keep the content up,” St. Clair’s post continued. “This issue could be solved very quickly. It is not, and the burden is being placed on victims.”

Hillary Nappi, a partner at AWK Survivor Advocate Attorneys, a firm that represents survivors of sexual abuse and trafficking, notes that Grok’s safety failures on this front present an added risk to anyone who has personally experienced sexual violence. “For survivors, this kind of content isn’t abstract or theoretical; it causes real, lasting harm and years of revictimization,” Nappi says. “It is of the utmost importance that meaningful, lasting regulations are put into place in order to protect current and future generations from harm.”

Musk has long promoted Grok as superior to its competitors by sharing images and animations of sexualized female characters, including “Ani,” an anime-style companion personality. A notable portion of the bot’s dedicated user base has fully embraced this application of the technology, endeavoring to create hardcore pornography and trading tricks for getting around the bot’s limitations on nudity. Several months ago, a member of a Reddit forum for “NSFW” Grok imagery was pleased to announce that the AI model was “learning genitalia really fast!” At the time, the group was successfully producing pornographic clips of comic book characters Supergirl and Harley Quinn as well as Elsa from the Disney film Frozen.

Despite all the evidence of what people are actually using it for, Musk has continued to tout Grok as a stepping stone to a complete understanding of the universe. Last July, he speculated that it could “discover new technologies” by the end of the year or “discover new physics” in 2026. As with so many of Musk’s grandiose promises, these breakthroughs have yet to materialize. For the moment, it’s all smut and no science.
