Grok’s Leering Pictures Are the Newest Version of an Old Problem
By Anna Merlan, Mother Jones

There’s a picture of me that I kept saved on my desktop for years; I suppose we could call it a caricature. A little more than a decade ago, someone on a Nazi messageboard pulled a photo of me from social media, then updated it with some antisemitic flair: a little cartoon rat sitting on my shoulder, a yellow Judenstern pinned on its tiny body. Referencing what Jews were forced to wear during the Holocaust is meant to be a humiliation; the goal isn’t hard to figure out, given that the whole star patch thing is near-medieval in both its imagery and its aims. Unfortunately for the messageboard user, the rat was adorable, making the overall effect of the illustration really, really cute—like I had a lovable ratty little sidekick. I kept the image for a long time, until I eventually lost it to the sands of time and the need to clean my computer’s desktop.

Of course, there have been other, much worse, manipulated images of me out there, which I’m deliberately not describing because it would probably please their creators. As long as the social internet has existed, some of its users have wanted to deface, sully, and degrade images of women. The methods used to effect that outcome range from the slapstick—hello, Herr Rat—to the truly vile. When I first started working at the feminist website Jezebel, a semi-common practice among troll messageboard users was to masturbate on a writer’s photo, then email her a picture of the results. In 2014, the site dealt with a barrage of disgusting and graphic photos in our comments, often featuring pictures of female corpses. The same year, scores of celebrity nude photos were hacked and leaked online, with a subreddit dedicated to sharing the photos left up for almost a full week—making violation of both the living and the dead an ongoing theme.

Websites that produce deepfake nude images have also existed for several years. In 2024, a Guardian investigation found that these sites contained faked images of thousands of celebrities and other women, which were then often uploaded to porn sites. The problem was clearly snowballing, the paper wrote: “In 2016, researchers identified one deepfake pornography video online. In the first three-quarters of 2023, 143,733 new deepfake porn videos were uploaded to the 40 most used deepfake pornography sites—more than in all the previous years combined.” 

And now, of course, there’s Grok, the generative AI tool created by Elon Musk’s xAI, which has been embroiled in a scandal over users exploiting its new image-editing feature to create gross images of women—and, crucially, to post those manipulated images on X, where other people can see them, because attempted public humiliation is the goal. Those manipulated images have reportedly included one of Renée Nicole Good, the woman killed by ICE agent Jonathan Ross in Minneapolis, depicting her dead body slumped over the wheel of her car and edited to put her in a bikini.

All of this adds up to a chilling and global picture, experts say, one sometimes referred to as “technology-facilitated gender-based violence.” And while the problem is very old, generative AI is making things much worse, said Kalliopi Mingeirou, the chief of UN Women’s Ending Violence Against Women section. 

“Generative AI has fueled a surge in deepfake abuse,” she told me in a statement, “with women comprising the overwhelming majority of victims.” Mingeirou added that a December 2025 UN Women report found that almost one in four women working as human rights defenders, activists, or journalists had “reported experiences of AI-assisted online violence.” 

“Urgent regulation and safety-by-design are essential,” Mingeirou added, “to ensure AI advances women’s rights rather than undermining their safety and participation.”

Carrie Goldberg, an attorney who often represents victims of sexual abuse, trolling, stalking, and revenge porn, says deepfaked images come up regularly in her practice. The earliest iterations were images of actresses turned into deepfaked porn, but the problem didn’t stop there. Today, “the main sets of victims coming to us are kids deepfaking kids,” she told me, “and then anonymous trolls creating deepfakes to sextort their target into giving them actual nudes. I know of one case on Discord where a targeted child was coerced down a very dark road that began with somebody threatening her with a deepfake he’d made.”

“We have observed popular online personalities getting targeted a lot,” Goldberg adds, “and of course this recent spate on X, many of the victims are feminists who dare criticize the phenomenon.” 

In the United States, there are both state and federal laws designed to address the harms caused by “nonconsensual intimate imagery” (NCII), another common term for both real nudes and deepfakes distributed online. In May 2025, President Trump signed the “Take It Down Act,” a bipartisan bill criminalizing the distribution of NCII and requiring platforms to remove such images within 48 hours of a victim’s request. The bill was introduced by Senators Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) after an incident in Texas in which a high schooler took images of his classmates, manipulated them to appear nude, and posted them on Snapchat. California’s AB 621, also passed last year, strengthens the legal remedies available against people who distribute—and not just create—deepfake images.

“The law has been dealing with false information and bad information for a long time,” points out David Greene. He’s the senior counsel at the Electronic Frontier Foundation, the preeminent digital rights and privacy group in the United States. “We have structures in place for balancing the competing free speech and harm interests when dealing with false information.” 

But law isn’t the only response. It’s always advisable, Greene says, “to urge the companies to look for tech solutions… some way they can do something to make it harder to make these images.” Other AI image generators, he points out, “do have filters in place,” more than Grok seems to. 

When it comes to tech companies fixing the very problems they’ve created, Greene doesn’t see “relying on their good faith” as the sole solution. “As with any bad actions by a company, consumers and users only have so many points of leverage,” he says. “People fleeing the site, or other ways of exerting financial pressure on the company, those will probably be more effective than trying to appeal to [Musk’s] feelings, and certainly to his ethics.” 

With sexualized deepfakes, Greene adds, it’s important not to lose sight of why they’re being created in the first place. If “women tend to disproportionately be victims, then it’s probably part of a larger sociological phenomenon,” he says, of harm to women not being “evaluated as being as harmful as it actually is.”

True to form, tech companies aren’t leading the way to a less disgusting future. In recent tweets, Musk clarified that Grok should, as he put it, “allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV.”

In a cryptic post, Musk also recently declared that Grok should “have a moral constitution.” He didn’t elaborate.
