
Ethics of Artificial Intelligence
Clip: Season 6 Episode 5 | 8m 17s
We explore the ethics behind artificial intelligence with Reid Blackman of Virtue Consultants.
Nevada Week is a local public television program presented by Vegas PBS

-So Reid, the 2024 elections are upon us.
Will you explain to our viewers what deepfakes are and how likely it is that they'll be coming across them during this election cycle?
(Reid Blackman) So deepfakes are images or video generated using AI that, to the casual observer, are going to look like the real thing.
So recently, for instance, there's a picture of the Pope wearing a very expensive puffer jacket, but it was fabricated.
That wasn't an actual photo that anyone took.
It was something that someone made using generative AI.
And you can do the same things with political candidates--put them in compromising positions, put them in context in which they wouldn't want to be seen--and unsuspecting viewers can think, oh, that Presidential candidate really is in that context doing that thing, which could obviously be a very dangerous thing.
-How likely is it that Americans are going to be facing these during this election cycle?
-It's hard to say, but probably quite likely.
And there are really three sources of threats: You've got opposing political parties, which can unleash this tactic on each other.
You have state actors.
So, say, Russia or China generating false images and videos of Presidential candidates to cause disruption.
And then you have just lone individuals acting without the explicit direction of any political campaign or state authority.
They just have access to generative AI.
Nearly everyone does now if they have a computer and an internet connection, and they can create misinformation.
-How prepared do you think Americans are to discern what's real?
-Roughly zero.
There's basically nothing.
There's very little out there to help viewers understand what they're looking at.
The thing that's been floated the most is what's called a watermark.
So people are familiar with watermarks.
It shows us that something is the property of someone or something.
If you go online now and you get an image, it might have a Getty watermark on it to indicate the image is owned by Getty.
We don't have that yet for generative AI.
People are working on it.
But even that's not going to be, I don't think, a very good solution.
-The quality of these deepfakes, how high is it?
I mean, how hard is it to tell if something is fake?
There was a time when you could kind of tell, right?
-In some cases you can tell.
You can do a bad job.
I mean, it's a tool.
And some people are going to wield that tool really well.
And some of the people will use it really poorly.
But we're not worried about the people who are doing it poorly.
We're worried about the percentage of people who can do it well.
And the truth is, it doesn't take a lot of people in order to create things that are realistic looking.
And then the real worry is not just that people create these things, but that you have online influencers who take that, retweet it, reshare it, or whatever, and then their entire network sees it.
So it's not just the fact that you can create these more or less high quality fake images.
It's also that they can be spread by people with various political motives.
-So you talked about the watermark.
That's something you can look for to determine whether it's legit or not.
What are some other solutions, and how far off are they?
-We don't have it, as far as I know.
The watermark solution is the go-to solution among technologists.
The idea, or the theory, is that if you see an image and it says "generated by AI," or something along those lines, then viewers will say, oh, okay, this is not real, and discard it.
To my mind, though, that's not what's actually going to happen.
What will probably happen is people will see the images.
Even though they know it's fake, they'll have an emotional reaction to it.
So when you go see a movie or read a book, you're crying when those people fall in love, you're crying when that person dies, but you know it's not real.
It doesn't take reality to engage our emotions.
And a lot of our decisions are emotionally driven.
Do we feel like that person is trustworthy, for instance, so that we would vote for them?
The problem that I see is that you can be transparent, or there can be watermarks showing what is fake and what's real, but the emotional impact will already have landed, which will also affect decision making.
-So a fake image can impact a person perhaps subconsciously?
-All the time.
You've probably cried at a Pixar movie.
-Uh-huh.
-You know it's not real.
It's a cartoon.
It's still emotional.
So one of the concerns is that, yeah, we're gonna get emotional.
Our emotions will still engage with the content, even though we know it's not true.
-I want to move to chatbots now.
A lot of people will talk about ChatGPT.
A recent article from the Associated Press read, quote, "Spend enough time with ChatGPT and other artificial intelligence chatbots, and it doesn't take long for them to spout falsehoods."
How accurate is that statement?
-Yeah, that's accurate.
I mean, it's pretty frequent that these so-called "large language models" or "chatbots" output false information.
In fact, when some of the biggest companies released them, during the demos showing the world what these things can do, they were saying false things.
Even if it was something like the length of the electrical cord attached to the toaster oven or something along those lines.
Or the price of the product.
Those are relatively low stakes.
There are higher stakes.
For instance, there are doctors using these kinds of things to consult with the chatbot about diagnosing patients.
That's obviously very high risk.
And, yeah, they're gonna output false information, because they're not drawn from a well of knowledge.
They're just drawn from the well of stuff that's on the internet.
There's lots of false stuff that's on the internet, so you're gonna get lots of false outputs.
-Also in that article, Microsoft Co-Founder Bill Gates was quoted as saying, "I'm optimistic that over time, AI models can be taught to distinguish fact from fiction."
Do you share in his optimism?
-One question is: What does he mean by "over time"?
You know?
There's five months, and there's 50 years.
What's the time frame that we're talking about?
There are various techniques.
I think it's really too early to say how optimistic or pessimistic we should be.
There are lots of technologists pouring lots of time, energy, money, intelligence into solving what gets called the "hallucination problem."
That's the problem of outputting false information.
And they're making some progress here and there, but it's a really hard problem.
I don't see a solution to it anytime soon.
But maybe we'll get really surprised.
Unlikely.
-I was wondering the same thing.
"Over time," how much time do you think it'll take?
We don't know.
-A century?
I don't know.
-And you don't know, either?
-I don't, no.
-And you don't want to estimate?
-No, I wouldn't put an estimate on it.
I mean, what's happening is that technologists are trying to train it for very particular purposes so that you can use a chatbot for "this" thing.
Not all the things, just this thing.
And then maybe it's going to be reliable in this thing.
So for instance, Bloomberg, the financial news company, they have apparently trained their own chatbot on just their data that they trust.
The idea being that the false outputs will massively decrease.
And from what I gather, that's what's happening, although there's not a lot of independent verification of that.
So there might be better chatbots that put out less false information, but it's gonna be very use case specific, very purpose driven, not general purpose, obviously, which is what everyone has access to now and can use for whatever they want.
-What do you use it for?
-So I've used it to generate stories to tell my child at nighttime because I somehow got into this routine of telling her a story about a teacher who defends the school from a threat that has a superpower or can turn into some object.
After doing this for like a year, I've run out of ideas.
And so I turned to GPT, and it gave me some really good ideas.
-You're using--
-Low stakes.
-Yes, low stakes.
-Yeah.
-Lastly, your advice to people who are using ChatGPT or other chatbots?
-Well, it depends on the context.
But, generally, you have to understand that it's not a reliable source of information.
It will speak very confidently.
So there's a way in which it has a personality of being confident, and that might fool you into thinking, oh, it knows what it's talking about.
But in fact, the confidence is misplaced.
It outputs false information.
It can be obstinate.
So you say, "I think you're wrong."
And it says, "No, I'm right.
I know I'm right."
So really, if there's one warning, it's this: don't take it as an encyclopedic source of knowledge, an authoritative source of information.
It's just not that-- it's not that yet.
-Reid Blackman, Author of Ethical Machines, thank you for your time.