
Artificial Intelligence: Where it’s at and where it’s going
Season 6 Episode 5 | 26m 46s | Video has Closed Captions
We're looking at the future of Artificial Intelligence at this year's Ai4 Conference.
The Ai4 Conference is in Las Vegas, giving attendees an up close look at artificial intelligence, and where it’s heading. Amber Renee Dixon speaks to the experts on how AI can be used in everyday lives. And with the Hollywood strike and an upcoming election, we ask the experts about the ethics of AI.
Nevada Week is a local public television program presented by Vegas PBS

Artificial intelligence... what it is and where it's going.
That's this week on Nevada Week.
♪♪♪ Support for Nevada Week is provided by Senator William H. Hernstadt.
-Welcome to Ai4.
-And welcome to Nevada Week.
I'm Amber Renee Dixon joining you from Ai4, an artificial intelligence conference at the MGM Grand.
AI is making headlines these days for its capabilities, as well as its shortcomings, which we'll discuss.
But first, let's start with what is AI.
And for that, we bring in Michael Weiss, Co-Founder of Ai4.
Michael, welcome.
(Michael Weiss) Thanks.
Good to be here.
Hey, everyone.
-For those who are unfamiliar with AI, how would you explain it?
-AI is getting a computer to do tasks that previously only a human could do.
That's AI.
-But computers could do some human tasks, right?
-They could.
And I guess AI is sort of an aspirational idea that we can one day get computers to do all that humans can do and, eventually, even more.
-Now, that sounds scary for some people, right?
-It is scary for some people.
And there is a very real potential that if we don't build AI thoughtfully, it could be harmful to humans.
-And that is something we will be discussing ahead.
But let's talk about the most common uses of AI right now.
What are they?
-Yeah, I mean, you see AI being used a lot in customer service, to automate, you know, chatting on websites or phone calls.
You're starting to see more and more AI being used in cars.
Like Tesla cars actually have a lot of AI that helps them drive.
You're seeing AI be used in healthcare to develop new medicines and discover new drugs.
You're seeing AI also in healthcare to read medical images, like X-rays or MRIs.
So it's really-- you know, this conference, we cover, I think, 35 different industries.
And there's literally thousands of examples now of where AI is actually being used in the real world.
-And tell me about some of the ways that excite you that are being developed right now.
-I am excited by AI and education.
One example: if you have a classroom of, say, 30 students and you want to teach them a topic--it could be algebra--each student isn't at the same level in their learning.
And so AI can actually create personalized problems for each student and provide each of the 30 kids with the perfect problem to pull them along their learning journey, which is a really powerful idea.
-That's very exciting.
I also want you to talk about what chatbots are.
I think people hear ChatGPT a lot these days and may not know what exactly it is.
What is it?
-So ChatGPT is-- it's called a large language model, which is probably confusing, but it's just a chatbot.
And it's a text-based interface where you can just type and literally have a conversation with an AI.
So instead of, you know, chatting with your friend or family member, you're just saying, "Hello."
And the response, "Hello.
How are you?"
comes from an AI.
-How do you use it, personally?
-I use ChatGPT a lot.
It varies.
So let's say I want to learn a historical topic.
Instead of going to a Wikipedia page now, I'll actually go to ChatGPT because it can kind of give me like a more engaging learning experience.
Because I'll ask a question like, you know, "What happened in 1900, Spain?"
And it'll start saying stuff.
And then I can kind of customize exactly what I want to learn based on the AI's response.
-I also found it interesting that you talked about AI eventually replacing humans.
So you gotta get the question, I imagine, a lot.
Will AI take my job?
What is your response?
-Depends on your job.
-What industries are most affected or stand to be most affected?
-There's going to be certain, you know, knowledge worker digital tasks that are going to be taken over by AI.
One example, actually in a roundtable discussion I was in about two hours ago, this company is working on developing customer service VoiceBots so that when you call this company, you're talking to what you think is a human, but it's actually an AI voice.
And right now there are a lot of contact center jobs with people on phones talking to customers; now, instead of those jobs being held by humans, they'll be held by AIs.
-Let's walk over here, because there is an example of a company that could possibly threaten the jobs of southern Nevadans.
And it deals with hospitality.
Before I get your take on that, I'm gonna check it out real quick.
Let's head on over here.
And will you tell me what this product is all about?
(Jacob Korniak) Absolutely.
This is Furhat, the world's most advanced social robot.
Our robots are designed to speak and interact with humans.
-And am I too far off to think that this could be in hotels here in Las Vegas someday?
-Absolutely.
These robots are designed to interact with humans just as a hotel receptionist would in answering questions, improving the check-in experience, and being a friendly face to any sort of questions you may have coming into a hotel.
-Well, let me try to ask it some questions.
And I know that I can get a little bit-- I can complain, correct?
-Absolutely.
You can totally complain.
This robot is designed to interact exactly as any sort of regular hotel agent would and resolve any requests you may have.
-Let's see how it responds to me.
All right.
I just got into my hotel room, and the sheets are absolutely filthy.
What are you going to do about it?
-I'm sorry.
I didn't quite catch that.
Could you please repeat what you need assistance with?
-The sheets in my room are dirty.
-I'm so sorry to hear that.
I'll make sure to send housekeeping to your room right away to change the sheets for you.
Can you please provide me with your room number so I can inform them?
-My room is 509.
-Welcome to MGM Grand Hotel.
How may I assist you today?
-All right.
So not perfect.
But let me ask you just a couple more questions about it.
How many places is this in use currently, and at what other types of facilities outside of hotels?
-Absolutely.
So this is in a few hundred universities worldwide, used in a variety of robotics research.
But more recently, it's in hundreds of different companies, used for a variety of general interactions.
So as you see today, hotel, but it could also be things like bank tellers or airport gate agents or anything to assist the user experience.
-How at risk are Southern Nevadan jobs because of this technology?
-Sure.
They're not at risk.
You know, this stuff is really meant to help and improve the gate agent experience.
So when those hotel receptionists have a long line out the door and are struggling with staffing and all of the variety of issues they face a day, this is the technology here to improve and help their experience and just make everything run smoother.
-And the last question: Of the places where this is being used right now, have any of them decided to just go with this alone and no people?
-So at this point, no.
There's no scenarios where Furhat is just operating by itself.
It really works best with human oversight and human help.
-Thank you so much for your time.
-So, Michael, as we just saw, the technology is not quite perfect yet for these robotics.
I want to get your take on it, though.
There are varying reports as to how much AI is going to impact Southern Nevada's hospitality industry.
How much do you think it will impact it?
-My perspective and the takeaway that I would share is, for the most part, hospitality jobs are going to be safe for a while.
If you have an in-person job in a hotel--if you're at, you know, a front desk, if you're serving in a restaurant, if you're working in a casino, if you're a driver--these jobs aren't going to be replaced for a while.
If you have a back office job at a hotel, you know, like customer service, for example, like I mentioned earlier, then, you know, I might pay attention.
-Thank you so much for your time, Michael.
I'm gonna leave you here.
And we're gonna wrap up this little tour with a viral sensation.
This is Spot the robotic dog.
Can I talk to you real quick?
-Yes.
-All right.
Let's see what this can do.
And what is its purpose?
(Beshoy Daoud) This is actually Boston Dynamics' Spot.
But it is equipped with a BLK ARC laser scanner.
It's essentially a mobile mapping device.
So what it does is using Spot as a carrier, we're carrying the scanner into any environment of concern, of interest as well.
And then we're using the scanner to create a 3D map of that area.
And essentially, what we get is a digital twin of this area.
-And you did that for the Conference Center, which we're going to be taking a look at.
But tell me some other areas where this might go into.
-So one of the popular use cases is something like building construction.
A construction site really develops rapidly every week.
So if you were to look at the amount of work that it takes to document the progress, it's really labor intensive to keep the status of that site up to date with respect to the design.
So you can easily set it up on a mission so that it goes out scanning at the end of every day, and it will capture this information.
And then when the engineers come back on site the next day, the data is ready for review.
-This could also be used in an area of concern for safety, such as?
-If you think about a scenario where there is fire damage in a building--doing an assessment inside a catastrophic area before sending people in to do their tasks.
This assessment is very important to know how you're going to walk inside there and how you're going to keep people safe in those areas as well.
-Hey, last question: How much does Spot cost?
-Spot Boston Dynamics has a wide variety of accessories, so that can really vary in the prices.
But I think you're looking around anywhere from $80,000 to $100,000.
-Ooh-wee!
That is the price of maybe a condo, a real expensive car.
Speaking of expensive, that's perhaps what comes to mind when you think of hiring a lawyer.
But to prevent you from having to hire a lawyer, Joshua Browder came up with DoNotPay, which he claims is the world's first robot lawyer.
So, Joshua, you say that DoNotPay is the world's first robot lawyer.
But when we say robot, we're not talking about an actual robot in the courtroom, right?
What is it we're talking about?
(Joshua Browder) That's right.
It's not a terminator that fights in the courtroom, but an online AI service that helps consumers fight back against companies and governments.
Getting people off parking tickets, saving money on their bills, getting refunds, all of the areas where people are being ripped off in their everyday life.
-This is a subscription service?
-That's right.
-How much are people paying?
-It's $18 a month, which is a thousand times cheaper than a lawyer.
-Okay.
And how much money are you intending to save people?
-The average person saves an estimated $400 a year.
-Okay.
What are the most common legal services you are providing via DoNotPay?
-Mainly consumer rights, so canceling subscriptions, getting refunds.
Bill negotiation is a big one.
So we have AI robots log into people's utility accounts and talk to the utility companies to fight back and lower their bills.
-Earlier when you spoke about this at the presentation you gave, you got a round of applause.
People are excited about the ability to negotiate with-- or not have to negotiate in person or on the phone, but have an AI chatbot do it for them.
Now, in these cases, do you always have to be right?
What if you are late in a bill?
Can you still utilize this service to get your electricity turned back on, for example?
-We give people the best shot of fighting back.
People live such hard lives where the system takes advantage of them.
They deserve an advocate to fight back.
And we will give them the best shot based on their circumstances.
-How do you do that?
How does this work?
-So it's interesting because we're at an AI conference, and the big companies are using AI.
So our AI logs in and chats to their AI and submits all the legal statutes and letters and makes the case for you and goes back and forth.
And so the two AIs are talking to each other.
-How do they know what to say?
How does yours know what to say?
-So we train it.
So we find all the successful cases, and we feed all these letters into the AI and then we say, generate this conversation for this individual.
-You are not a lawyer.
-That's right.
-So how is this legal?
-This is an area that's underserved.
There's no lawyer that will get out there to help someone with a $20 dispute.
And these big companies know this, so they know they can rip people off without anyone being able to fight back.
And so we're not trying to compete with lawyers.
We're trying to help ordinary people with something that hasn't been done before because it's too expensive.
-And I was thinking that everyone does have the right to defend themselves.
-Yes.
-Is it-- I'm sorry.
What were you gonna say?
-We're a tool to help people fight back, defend themselves.
You're right, it's in the Constitution that you can defend yourself and fight back for your rights.
And in the software AI age, we're a tool to give people superpowers to do that.
-There are some people who think what you're doing is illegal.
You faced a lawsuit in which the plaintiff claimed that you were practicing law without a license in the state of California.
What was your response to that?
-We wanted to do more complicated cases, like speeding tickets and actually bring AI into the courtroom, which is very exciting.
But then we started competing with lawyers.
And we're in Las Vegas, the capital of the billboard lawyers, and we got a lot of lawyers after us because they didn't want this software to threaten their jobs.
And so we decided to stay focused on these areas that everyone can agree on.
Everyone hates these utility companies.
No one can cancel their subscriptions.
And that's the area we're focused on now.
And-- -A gym subscription, for example?
-Yeah.
You have to sign a legal letter to get out of a gym subscription.
And it's such a broken country.
I joke, you shouldn't need AI to cancel a subscription.
But here in America, in Vegas, we do.
-Ah, gosh.
And that's part of how the system is set up.
-Yeah.
Because we're going back to our core focus, and these lawsuits are going away.
-Okay.
So what were lawyers threatened by, in your opinion?
The traffic cases?
So you are moving away from that?
Because that's how you got your start, really.
-Well, we started with parking tickets, which we're still doing.
But we can do-- what you mentioned, you don't have to go to court.
And that's the thing that matters.
Any online dispute can be done by us.
But when you have to stand up in court, we don't have a physical robot that can appear on people's behalf.
-So where do you stand with that goal of actually getting physically into a courtroom?
Because you wanted to have a chatbot with a defendant in a traffic court.
But what happened with those plans?
-We're working on finding a place that allows it.
The people writing the rules are the lawyers, and they don't want to be replaced.
So they're the first ones to kind of protect themselves from AI, because they're very good at writing rules.
They are lawyers, after all.
-Is that your ultimate goal, to get in the courtroom?
-I think that's our goal.
And we will achieve that because there's an access to justice crisis.
Over 80% of people who need legal help can't afford it.
And so eventually, something has to change.
But for now, we're happy to fight the evil big companies.
-And what will that look like if and when it happens?
A headset that is instructing a person?
-Technology is moving so fast.
It could be AirPods.
It could be a headset.
It could even be glasses where your whole defense appears in front of your eyes.
-Wow!
You mentioned during the presentation, which also got a ton of applause, what you're doing with robocalls.
What is that?
-So there's an amazing law which says that you can sue robocallers for $1,500.
So every time you get a spam call, you can get cash.
But people don't want to jump through the hoops.
And the robocallers hide behind hidden identities, and they don't say who they are.
So we've built a product to track them down and get your money.
And the way it works is they phone you up and they try and sell you something, and you can give them this fake DoNotPay credit card.
And when they try and run the card, it declines.
And it gets their business name, phone, address, which has all the details it needs to generate a letter and lawsuit to get that money.
-Wow!
Joshua Browder, I think that's something a lot of people can get behind.
So applause to you for that.
Thank you for your time.
And it is time now for our final segment which is an AI ethics discussion with Reid Blackman, Ethics Consultant.
So Reid, the 2024 elections are upon us.
Will you explain to our viewers what deepfakes are and how likely it is that they'll be coming across them during this election cycle.
(Reid Blackman) So deepfakes are just images or video generated using AI that, to the casual observer, are going to look like the real thing.
So recently, for instance, there's a picture of the Pope wearing a very expensive puffer jacket, but it was fabricated.
That wasn't an actual photo that anyone took.
It was something that someone made using generative AI.
And you can do the same things with political candidates--put them in compromising positions, put them in context in which they wouldn't want to be seen--and unsuspecting viewers can think, oh, that Presidential candidate really is in that context doing that thing, which could obviously be a very dangerous thing.
-How likely is it that Americans are going to be facing these during this election cycle?
-It's hard to say, but probably quite likely.
And there are really three sources of threats: You've got opposing political parties that can unleash this tactic on each other.
You have state actors.
So say Russia or China generating false images and videos of Presidential candidates to cause disruption.
And then you have just lone individuals acting without the explicit direction of any political campaign or state authority.
They just have access to generative AI.
Most everyone does now if you have a computer and an internet connection, and they can create misinformation.
-How prepared do you think Americans are to discern what's real?
-Roughly zero.
There's basically nothing.
There's very little out there to help viewers understand what they're looking at.
The thing that's been floated the most is what's called a watermark.
So people are familiar with watermarks.
It shows that something is owned by someone or something.
If you go online now, you get an image.
It might say-- the image might have a Getty watermark on it to indicate this image is owned by Getty.
We don't have that yet for generative AI.
People are working on it.
But even that's not going to be, I don't think, a very good solution.
-The quality of these deepfakes, how-- how high is it?
I mean, how hard is it to tell if something is fake?
There was a time when you could kind of tell, right?
-In some cases you can tell.
You can do a bad job.
I mean, it's a tool.
And some people are going to wield that tool really well.
And some people will use it really poorly.
But we're not worried about the people who are doing it poorly.
We're worried about the percent of people who could do it well.
And the truth is, it doesn't take a lot of people in order to create things that are realistic looking.
And then the real worry is not just that people create these things, but then you have people that are online influencers who take that, retweet it, reshare or whatever, and then their entire network sees it.
So it's not just the fact that you can create these more or less high quality fake images.
It's also that they can be spread by people with various political motives.
-So you talked about the watermark.
That's something you can look for to determine whether it's legit or not.
What are some other solutions, and how far off are they?
-We don't have it, as far as I know.
The watermark solution is the go-to solution by technologists.
The idea being if you see an image and then you see that it says generated by AI, or something along those lines, the idea or the theory is supposed to be that then the viewers will say, oh, okay, this is not real, and discard it.
To my mind, though, that's not what's actually going to happen.
What will probably happen is people will see the images.
Even though they know it's fake, they'll have an emotional reaction to it.
So when you go see a movie, you read a book, you're crying when those people fall in love, you're crying when that person died, but you know it's not real.
It doesn't take reality to engage our emotions.
And a lot of our decisions are emotionally driven.
Do we-- do we feel like that person is trustworthy, for instance, so that we would vote for them?
The problem that I see is that you can be transparent or there can be watermarks about what is fake and what's real, but the emotional impact will already be had, which will also impact decision making.
-So a fake image can impact a person perhaps subconsciously?
-All the time.
You've probably cried at a Pixar movie.
-Uh-huh.
-You know it's not real.
It's cartoon.
It's still emotional.
So one of the concerns is that, yeah, we're gonna get emotional.
Our emotions will still engage with the content, even though we know it's not true.
-I want to move to chatbots now.
A lot of people will talk about ChatGPT.
A recent article from the Associated Press read, quote, Spend enough time with ChatGPT and other artificial intelligence chatbots, and it doesn't take long for them to spout falsehoods.
How accurate is that statement?
-Yeah, that's accurate.
I mean, it's pretty frequent that these so-called "large language models" or "chatbots" output false information.
In fact, when some of the biggest companies released them and during their demos of showing the world what these things can do, they were saying false things.
Even if it was something like the length of the electrical cord attached to the toaster oven or something along those lines.
Or the price of the product.
Those are relatively low stakes.
There are higher stakes.
For instance, there are doctors using these kinds of things to consult with the chatbot about diagnosing patients.
That's obviously very high risk.
And, yeah, they're gonna output false information, because they're not drawn from a well of knowledge.
They're just drawn from the well of stuff that's on the internet.
There's lots of false stuff that's on the internet, so you're gonna get lots of false outputs.
-Also in that article, Microsoft Co-Founder Bill Gates was quoted as saying, "I'm optimistic that over time, AI models can be taught to distinguish fact from fiction."
Do you share in his optimism?
-One question is: What does he mean by "over time"?
You know?
There's five months, and there's 50 years.
What's the time frame that we're talking about?
There are various techniques.
I think it's really too early to say how optimistic or pessimistic we should be.
There are lots of technologists pouring lots of time, energy, money, intelligence into solving what gets called the "hallucination problem."
That's the problem of outputting false information.
And they're making some progress here and there, but it's a really hard problem.
I don't see a solution to it anytime soon.
But maybe we'll get really surprised.
Unlikely.
-I was wondering the same thing.
"Over time," how much time do you think it'll take?
We don't know.
-A century?
I don't know.
-And you don't know, either?
-I don't, no.
-And you don't want to estimate?
-No, I wouldn't put an estimate on it.
I mean, there are very-- what's happening is that technologists are trying to train it for very particular purposes so that you can use a chatbot for "this" thing.
Not all the things, just this thing.
And then maybe it's going to be reliable in this thing.
So for instance, Bloomberg, the financial news company, has apparently trained its own chatbot on just the data they trust.
The idea being that the false outputs will massively decrease.
And from what I gather, that's what's happening, although there's not a lot of independent verification of that.
So there might be better chatbots that put out less false information, but it's gonna be very use case specific, very purpose driven, not, obviously, general purpose, which is what everyone has access to now, just use it for whatever you want.
-What do you use it for?
-So I've used it to generate stories to tell my child at nighttime because I somehow got into this routine of telling her a story about a teacher who defends the school from a threat that has a superpower or can turn into some object.
After doing this for like a year, I've run out of ideas.
And so I turned to GPT, and it gave me some really good ideas.
-You're using-- -Low stakes.
-Yes, low stakes.
-Yeah.
-Lastly, your advice to people who are using ChatGPT or other chatbots?
-Well, it depends on the context.
But, generally, you have to understand that it's not a reliable source of information.
It will speak very confidently.
So there's a way in which it has a personality of being confident, and that might fool you into thinking, oh, it knows what it's talking about.
But in fact, the confidence is misplaced.
It outputs false information.
It can be obstinate.
So you say, "I think you're wrong."
And it says, "No, I'm right.
I know I'm right."
So it really, if there's one warning, it's don't take this as an encyclopedia source of knowledge, an authoritative source of information.
It's just not that-- it's not that yet.
-Reid Blackman, Author of Ethical Machines, thank you for your time.
-Thank you so much.
-And thank you for watching.
For any of the resources discussed on this show, go to vegaspbs.org/nevadaweek.
And I'll see you next week on Nevada Week.
♪♪♪
Artificial Intelligence and Legal Services
Clip: S6 Ep5 | 6m 23s | Joshua Browder of DoNotPay explains how AI can be used for people seeking legal services.
Ethics of Artificial Intelligence
Clip: S6 Ep5 | 8m 17s | Explore the ethics behind artificial intelligence with Reid Blackman of Virtue Consultants.
Clip: S6 Ep5 | 10m 30s | Amber Renee Dixon explores Ai4 with co-founder Michael Weiss.