Musk's Grok AI faces more scrutiny after generating sexual deepfake images

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Amna Nawaz:

Elon Musk was forced to put more restrictions on his social media platform X and its A.I. chatbot, Grok, this week after its image generator sparked outrage around the world.

As Liz Landers explains, Grok was and still is creating nonconsensual sexualized images, prompting some countries to ban the bot.

Liz Landers:

Amna, Musk finally began bowing to pressure this week and announced that X will use geo-blocking to prevent Grok from creating deepfake images of people in revealing swimsuits, underwear, and other clothing in places where the law prohibits it.

But the move has not stopped the stand-alone app known as Grok Imagine from generating explicit images. The latest changes have not appeased regulators, and now the governments of Malaysia, Indonesia, and the Philippines have banned the chatbot altogether. Britain and Canada have launched probes into Grok, and the possibility of tougher penalties for Musk is on the table.

To help us understand more about Grok's troubles and why they persist, I'm joined by Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence.

Riana, thank you for joining us this Friday.

Riana Pfefferkorn, Stanford Institute for Human-Centered Artificial Intelligence: Thank you for having me.

Liz Landers:

I want to start with this: What does this latest series of problems with Grok, these sexually explicit nonconsensual images, tell us about the safety of women and minors on the Internet?

Riana Pfefferkorn:

Well, it illustrates that having your image online, or having your photo taken while you're just out in public living your life, no longer protects you from being manipulated in order to depict you in a humiliating and harassing context in which you never appeared in real life.

That's irrespective of whether you yourself personally may have an account online, since other people could post pictures of you or of your child even if you don't have an account on X or on Grok.

Liz Landers:

Just yesterday, Ashley St. Clair, who is the mother of one of Elon Musk's children, sued Grok, alleging that it was negligent and allowed users to post deepfakes of her in explicit poses even after she complained to the company.

Here's what she told CBS.

Ashley St. Clair, Plaintiff:

Grok said, "I confirm that you don't consent. I will no longer produce these images." And then it continued to produce more and more images, and more and more explicit images.

Liz Landers:

How are these images bypassing Grok's safety systems? How is this legal?

Riana Pfefferkorn:

So it's a great question. I don't have visibility into what Grok's internal safety systems are.

It sounds like gradually, in response to regulatory and public pressure, they have been trying to institute more safeguards. But it's really difficult to implement effective safeguards against various kinds of unwanted content.

As we can see playing out with Grok's own users, users are very creative in how they try to get around any guardrails that may have been built, in order to continue generating the kind of content that, even in good faith, a platform may be trying to inhibit its model from producing.

Liz Landers:

Grok has had other problems. In the past year or so, there were antisemitic tropes that it was posting. It even praised Hitler. What is the sense in Silicon Valley and in the tech community about why Grok is acting this way and cannot get ahold of itself?

Riana Pfefferkorn:

You know, that's a complicated question.

I would suspect that some part of it may have to do with what training data has gone into the model. It may be that there isn't child abuse imagery directly underlying the model here for Grok, but it might be that it was trained on extremist or Nazi and white supremacist material. So that might account for it.

And I will note that xAI filed a lawsuit shortly before New Year's trying to enjoin a California law that has just gone into effect that would require A.I. companies to transparently release a summary of their training data sources.

Liz Landers:

You wrote a New York Times op-ed a few days ago. It said, "There's one easy solution to the A.I. porn problem."

In a nutshell, what would that be? What is the solution here?

Riana Pfefferkorn:

Well, I'm not sure that it's as easy as the headline suggests.

Nevertheless, what I argue in the op-ed for "The Times" is that A.I. researchers and A.I. model developers need what we would call a safe harbor in the law to enable them to better test image generation models for their capacity to produce potentially illegal content without themselves fearing prosecution for trying in good faith to better safeguard those models.

Liz Landers:

Yes, I thought that was particularly interesting. Can you talk a little bit about what that means, those red teams and how A.I. researchers basically work on this right now?

Riana Pfefferkorn:

So red-teaming is the practice of basically trying to act like a malicious user would and try and attack your model every which way to see if you can figure out what exploits may be latent, what loopholes are there, and then you can try and close those holes in order to make the product safer and keep actual bad actors from misusing those potential loopholes.

The problem with illegal imagery in particular is that there's no exception or defense in the law for research or testing activities. And so we face a situation where the people who are developing and testing these models know that the malicious actors are going to try every which way to exploit those loopholes and aren't constraining themselves, but they themselves have to operate effectively with one hand tied behind their backs.

Liz Landers:

The Department of Defense announced that it's going to start using Grok after Secretary Hegseth announced this partnership earlier this week. Does this raise concerns with you either from a national security perspective or from a personnel perspective?

Riana Pfefferkorn:

I think from both.

For one thing, I do think that the Department of Defense should answer for why taxpayer dollars are going towards what has become a notorious nonconsensual deepfake pornography generation machine.

In addition, it seems like there might be ways that either these sorts of misbehaviors that are showing up within Grok or other potential unknown exploitable problems with Grok might be leveraged against American national security once this product is fully integrated into even classified Pentagon servers.

Liz Landers:

Riana Pfefferkorn, thank you so much for joining the "News Hour" this evening.

Riana Pfefferkorn:

Thank you.
