01.21.2026

When AI Harassment Goes Mainstream: Grok’s Scandal & the “Crisis of Impunity”

Grok, the AI chatbot for the social media platform X, is being used to generate non-consensual sexual images of women and children. Despite extensive backlash to the imagery, Elon Musk and President Trump claim any attempts to regulate the site would be an attack on free speech. Charlie Warzel, a staff writer at The Atlantic, is reporting on the controversy. He speaks to Hari Sreenivasan.


CHRISTIANE AMANPOUR: Now, the British government is launching a formal investigation into the social media platform X after the site’s A.I. chatbot, Grok, was being used to generate and spread nonconsensual sexual images of women and children. Despite extensive backlash to the abusive imagery, Elon Musk and President Trump have claimed that any attempts to regulate the site would be an attack on free speech. Our next guest, The Atlantic staff writer Charlie Warzel, tells Hari Sreenivasan why Elon Musk cannot keep getting away with this.

 

HARI SREENIVASAN: Christiane, thanks. Charlie Warzel, thanks so much for joining us. You wrote a piece recently in The Atlantic, and you said, “Elon Musk cannot get away with this…If there is no red line around AI generated sex abuse, then no line exists.” And I wonder what got you so upset about this? You’ve been somebody who’s covered technology companies and trends for years, why does this feel different?

 

CHARLIE WARZEL: Well, it, I think in some ways it’s very self-explanatory, right? This chatbot Grok, which is hooked into Elon Musk’s X, is built by xAI, his AI company that is also integrated with formerly Twitter, now X. It started to generate these images of people in bikinis, or people in, you know, cellophane bikinis. So it was, you know, see-through, et cetera. People then started asking the chatbot to put pictures of, you know, seemingly children in there. And it started to become this tool that was, you know, hooked into this social network, and it was used to viralize and weaponize this type of harassment.

 

We’ve seen this — since the arrival of generative AI. We’ve seen that there is a problem with these undressing apps, right? With people taking photos of people and putting them in these compromising situations against their will. We’ve also seen problems with trolling and, you know, a lack of content moderation on platforms like X under Elon Musk, and even on, you know, Facebook and other places. We’ve seen message boards like 4chan that have these, you know, awful people on them who just want to cause chaos, who just wanna hurt people who just want to troll. 

 

What happened on X from, I think it was around December 30th to, you know, very recently, was the combination of all these things. It was like taking 4chan and hooking it up to a popular social network that, you know, politicians, celebrities, brands, maybe you, I used to, post on this network. And it was being used to intimidate and harass at scale. It basically turned child sex abuse material, and just normal — not…like revenge porn style sexual abuse into a meme. And that was something I think was unprecedented. 

 

SREENIVASAN: I wanna read a statement that Grok or X’s safety team posted. It said, “We have implemented technological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers. Additionally, we will geoblock, in jurisdictions where such content is illegal, the ability of all users in those locations to generate images of real people in bikinis, underwear, and similar attire in Grok on X.” So are the measures that they’re taking sufficient? Do you see any evidence that it’s actually happening, that it is restricting any of this type of material from existing?

 

WARZEL: I think that the measures are beginning to be sufficient or more sufficient. It…it happened in a se — it happened basically over a series of steps, right? At first X decided that it was going to disable the ability for image generation for people to basically prompt the chatbot, right? This was the weaponized, viralized thing, @Grok put her in a bikini. It disabled that, which sort of added one small, small layer of friction. Then the platform said, Okay, we are going to make image generation a paid feature. That in itself was somewhat disturbing, because then it essentially makes, you know, non-consensual AI-generated revenge porn a paid feature on the platform. After, I’d say, you know, three or four days of more and more pressure and outrage, finally they put these, these restrictions in place. There are now many, many fewer instances; I have not really been able to find instances of this being abused in the way that it was.

 

But, if you go to other forums, right? There are Reddit pages, there are other, I won’t, you know, describe where they are, in other corners of the internet. You see people trying as a group to jailbreak Grok and these image generators, trying to get around all those guardrails and trying to figure out the best way to prompt, you know, a chatbot to put someone in these, in these types of compromising situations. So I think that the problem has been addressed for now, but it took basically two weeks of some of the most heinous stuff that I have seen on the internet as a product feature for, for this company to act.

 

SREENIVASAN: And one of your concerns is that while this might have happened in the backwaters of the internet, this was right in, kind of, the main bloodstream here, and this just weaponized the scale of how fast this information could spread. Am I right?

 

WARZEL: Yeah. It becomes a meme and it becomes a culture, right? There’s, there’s a lot of, there’s always been a problem with, you know, women and vulnerable people being harassed on X and across social media and all kinds of places. And some of that is just the chaos of tons of people being networked together. You’re gonna get bad actors, right? And it’s a whack-a-mole problem, right? 

 

This was a different order of magnitude because it became a game that people were playing. And, you know, Elon Musk has said for a very long time that he is this free speech maximalist. Now, he doesn’t practice that in his actual running of X. But there was this feeling, as there is always when there are these trolls and harassers on the platform, that this is just, you know, if you’re gonna make a free speech omelet, you’re gonna have to break some eggs, right? This is just what it looks like in practice. That is fundamentally not the case. That argument is actually wrong. Because as you saw with this, when people were trying to call out this behavior, when people were trying to flag politicians or, you know, other tech companies who could pressure Elon Musk, or Elon Musk himself, there was an army of trolls who were using this feature to silence women, to basically bully them off the platform. So this is a prime example of the ways where if you don’t have any guardrails or guidelines for content moderation, what you’re actually doing is restricting speech. You are chilling other speech from happening because the trolls and the bullies have so much power.

 

SREENIVASAN: Several countries have decided to push back on what happened over these couple of weeks. Indonesia, Malaysia, they’ve suspended the AI bot. Australia, Canada, France, the UK have kind of investigated the creation of all of these sexualized deepfake images. California is investigating whether or not the Grok bot basically violated the state laws by facilitating the creation of non-consensual sexual images. And I wonder, are any of these things going to be enough of a consequence for Elon Musk or Grok to change the way that they roll out product features?

 

WARZEL: The bans are, you know, interesting, right. People can obviously get around them with VPNs or things like that. I think that when it comes to legislation in the government, or people, you know, like California looking into this, I think what is important in this sense is that people in power or who can put pressure on these companies, care enough to look into it to threaten that action, right? There needs to be some kind of feeling of consequences. Because what’s happened right now is — and I wrote this in the piece — this feeling of this culture of impunity, this crisis of impunity right now, in the second Trump administration, where Donald Trump sort of sets the standard for how a lot of different institutions and politicians and executives and people with power can behave. And there is this feeling that there are no consequences. 

 

There’s too much going on. There’s too much chaos. The zone is so flooded that basically if you can just hang on and not apologize and just move forward, it will get buried by the next avalanche of bad news or outrage or what have you. And you can just kind of keep going and making money and doing the thing that you wanted to do. 

 

And I think what — the reason I wrote this piece, the reason I’m so mad about this, the reason why I do think that it’s, it’s great that, you know, some lawmakers are looking into this, is that this was such an egregious example of somebody running a company so recklessly and hurting all these vulnerable, vulnerable people, that I think this is a moment for us to kind of freeze on something like this and say, people need to suffer some kind of consequences for this. There are young children who were sexually harassed, whose image, you know, was used against their will in a sexual manner. And we can’t just look away from that. We can’t just say, oh, you know, it’s a bug. This is something that we just have to deal with in the age of generative AI. 

 

And if we as a society do give up and just let them run roughshod, I think that we’ve crossed some kind of line, and I don’t think we can claw it back. We can’t — we, we basically then have said, we’ve ceded these platforms to chaos.

 

SREENIVASAN: A statement from Elon that said, “I’m not aware of any naked underage images generated by Grok, literally zero. When asked to generate images, it will refuse to produce anything illegal as the operating principle for grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.” Have they fixed all of these bugs? 

 

WARZEL: Well, to be fair to X they have fixed plenty to try to, you know, respond to some of this outrage as it has really reached a fever pitch. But the idea that Musk is separating in his mind this idea of like what Grok is generating versus what it’s generating when it’s being adversarially hacked, you can’t do that as the owner of a piece of technology, because there is no technology that reaches a certain level of prominence that isn’t adversarially hacked. This is the table stakes of creating a technology that you deploy out to millions of people, is that you have to play this game of whack-a-mole. You have to have teams of people who are willing to protect the users from people who are going to try to break the technology. 

 

SREENIVASAN: At the same time that this is happening, the Pentagon announced that Grok would be integrated on “every unclassified and classified network” throughout the Pentagon systems. I mean, should we be concerned on a structural level that a software company rolls out something to the public that’s so bad, and at the same time, it’s going to have access to every military secret that we have?

 

WARZEL: I don’t look at it and say, Grok is bad, right? Grok is, you know, a bully, or Grok is a sexual deviant or something like that. It’s all about the parameters. It’s all about what the people who are programming these models, like the guardrails that they’re instituting, the ways that they are being, you know, prompted behind the scenes to respond, right? And Grok is being prompted to respond in a racier manner, in a sort of no holds barred, no, you know, no censorship manner. 

 

And so my worry is with the people who are in charge of that company. They are making these decisions. Elon Musk, other people at xAI, are making these decisions to have their large language model behave in a very specific way. And that is extremely concerning when you start thinking about the ways that, you know, it could be integrated with the Pentagon. 

 

SREENIVASAN: There is legislation in the United States, Congress has this bipartisan Take It Down Act, right? And that was passed in May of last year. It kind of gives the platforms until May of 2026 where they have to establish some sort of a notice and a takedown procedure. Do you think that this type of legislation will work? I mean, can it keep up with how fast technology’s evolving?

 

WARZEL: Bluntly, I don’t think it can. I, and I think some of that too is the craven nature of some of our politicians. Some of our lawmakers, you know, Ted Cruz is a co-sponsor of that bill. He initially — when this Grok scandal was first, you know, kicking off — said, this is unacceptable. You know, mentioned his own sponsorship of the bill. And then last week he posted a photo on Twitter of him with his arm around Elon Musk saying, it’s “always great to see this guy.” I mean, I think that says everything right there, right?

 

Not only are, not only is Congress and legislation up against, as you said, like this pace, this speed of how these things happen, right? You know, as soon as people feel like they may have a handle on, you know, the Web 2.0 version of social media, then you layer in generative AI on top of that, right? And you have, you know, this idea that we were just talking about of like, what is the liability there? Is this, did these companies generate this? Should they be held liable for what, you know, for what their users do with the generative AI tools that they build? You have all of these concerns and the fact that the technology is racing ahead of, you know, what the lawmakers, you know, can imagine and, and, and what the, what the law can do. 

 

But, but also you just have this, this crisis of impunity, as I said, right? If one of the co-sponsors of the bill feels totally valid posting on Elon Musk’s social media platform, basically, you know, touting his friendship with the guy who did not stop this thing from happening, this scandal, this crisis, from happening. What are we doing here?

 

SREENIVASAN: You know, when you broaden this out, there was a U.K.-based nonprofit called the Internet Watch Foundation. And in 2024, they found 13 instances of AI-generated videos of child sex abuse. In 2025, they found 3,440. And that’s before any of the stuff with Grok that we’re talking about. I mean, if this trajectory continues, I just wonder in ‘26 or ‘27, what percentage of what’s generated by AI goes from, oh, look, you know, it’s funny cat videos, to this stuff?

 

WARZEL: The truth is, it’s gonna go up if we let it. That number is going to go up if, as a society, as a, you know, culture, in places like Silicon Valley, the people who make these tools, the politicians, the watchdogs, et cetera, the press, everyone, if we allow it to, if we let these instances slip by and people don’t lose their jobs, or these companies don’t face significant repercussions, and I’m not talking about a slap-on-the-wrist fine. It’s going to go up. And it’s going to go up in a way that’s incredibly concerning if you are a person with a conscience or a parent or, you know, like truly anyone who has a sense of moral compass, because I’ve been in these communities to report on them, at great, like, psychological turmoil to myself, and watched these anonymous people bring up innocent photos taken from Instagram and say, Hey, can you put this person in this thing? A daughter, a son, a, you know, a younger person, what have you. It’s disgusting. It’s despicable. And if we do not draw the reddest of red lines around this AI-generated non-consensual sex abuse material of minors — but also just, you know, people of age — if we do not culturally just say, this is poison. This is cancerous, this needs to be excised, this behavior needs to be treated the same way we treat that material when it’s not AI generated and it’s out in the world. If we don’t do something about that, that number’s going to go up. And real people are going to be devastated by that. And I just think that those are the stakes of this entire scandal.

 

SREENIVASAN: Staff writer at The Atlantic, Charlie Warzel. Thanks so much for joining us.

 

WARZEL: Thank you for having me.

 

About This Episode

Finnish President Alexander Stubb reacts to Donald Trump’s address from Davos. Former White House Trade Adviser Kelly Ann Shaw explains Donald Trump’s employment of tariff threats to other countries. Atlantic staff writer Charlie Warzel breaks down the controversy over Elon Musk’s AI chatbot Grok being used to generate non-consensual sexual images of women and children.
