
Artificial Intelligence - Nov 3
Season 15 Episode 10 | 27m 46s
A new beginning, or the beginning of the end?
A look at A.I. and the possible consequences.
Northwest Now is a local public television program presented by KBTC

Northwest Now is supported in part by viewers like you.
Thank you.
sound.
Shall we play a game?
We could see this coming back in 1983 in the movie WarGames.
Or how about in the Terminator series, where Skynet becomes self-aware, activating killer robots?
But then, a decade later, in real life, the great chess players went down.
Then Watson won Jeopardy!
And now the rest may already be history.
A.I. has taken off and is embedded in the software and tools we use every day, with the promise of even more just over the horizon.
If we make it there.
So, part of the discussion tonight with two experts from the UW: will A.I. give us ten-hour workweeks and increase human productivity as a partner?
Or will it enslave mankind and possibly be the beginning of our collective end?
And our Steve Kitchens has the story of A.I. and its role in academia.
Is it a tool that will enhance human learning, or a crutch that will make true learning a thing of the past?
That's the discussion tonight on Northwest Now.
My generation grew up with an appreciation of futurist and author Isaac Asimov's Three Laws of Robotics.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
But what if the robot can think for itself?
It's not just programmed; it's goal-seeking and possibly most concerned about its own survival.
That's the worry about A.I.: software that evolves and possibly even gains something mimicking true understanding, emotions and feelings.
What if a robot, or even just a computer, gets mad, taking its programming to an extreme where the unintended consequences result in extermination?
It sounds farfetched, but some of the best minds in this field are deeply worried about just that kind of thing.
For now, though, our concerns are about things like ChatGPT, software that can write and compose.
And, as Steve Kitchens tells us, fool teachers every now and then.
Why this particular author was most suited to writing this text.
Some were like, no, A.I. is going to be taking away all of our writing instruction and everything like that.
This is the score discussion.
Gig Harbor High School English teacher Jessica Guillard preps students for a writing assignment that just a couple of years ago didn't exist.
A lot of kids were like, well, we shouldn't let the robots take over.
The research, study, writing and editing required for today's homework encourage students to embrace the use of artificial intelligence.
Sometimes I think the technology goes like really fast, and then we have to kind of say, okay, we have to have some boundaries here.
While much larger school districts in places like New York State and Seattle have pumped the brakes, other places, like the Peninsula School District in Gig Harbor, have instead chosen a measured yet thoughtful approach.
My seniors, especially graduating from high school now, those entry-level data entry types of positions aren't going to exist anymore, so they're going to have to learn how to work with the A.I. in order to be able to function in society.
We're kind of on the front line of it, but we're trying to see what we can do.
Executive Director of Digital Learning Chris Hagel says the district worked alongside instructors to develop lesson plans in late 2022 and early 2023 that guide students across age groups.
How do you use A.I. like ChatGPT to enhance instruction and help kids identify where the technology can fall short?
You know, trying to stay on top of this is going to be impossible.
But, you know, teaching kids to be critical thinkers and to be problem solvers and to utilize the tools that are available to them, those are the kinds of skills that we need.
It's almost impossible to predict where we're going to be in five years with this technology.
And if you know anything about school districts, you know that they move very slowly and that transformation is difficult.
Kristen Pinedo, from the San Francisco-based nonprofit aiEDU, is working with educators in school districts across America to navigate A.I., advocating for digital literacy and equitable access to emerging technology.
Yet the potential A.I. has to transform society is enormous, partly because of its universal fit for a lot of different things.
It is not something that gets pigeonholed into one field and is going to affect just that one field.
It's going to universally transform tons of things.
That's because A.I. itself is a very amorphous concept, and its applications are something that we're starting to see more and more of.
So while some worry students may use A.I. to generate entire assignments, essentially cheating themselves into a passing grade, the Peninsula School District instead believes kids and teachers can use A.I. as a tool, but not as a substitute for intelligence.
That's what their jobs are going to be.
Right.
So when we look to my seniors, especially graduating from high school now, those entry level data entry types of positions aren't going to exist anymore.
So they're going to have to learn how to work with the AI in order to be able to function in society.
In Gig Harbor, Steve Kitchens, Northwest Now.
Joining us now is Ryan Calo, a law professor at the University of Washington who specializes in robotics and artificial intelligence.
Ryan, thanks so much for coming to Northwest now for a discussion I've been meaning to have for a long time about artificial intelligence.
And every week it seems like the buzz on it is growing and growing and growing.
I think my threshold question to you is this: is what we're hearing now reality, or is it a lot of hype to get investors involved?
And is it Wall Street, and every company and every earnings call, saying nothing but A.I.? Which is the reality?
I heard this IBM executive say about artificial intelligence that it's a 50-year overnight sensation.
Right.
Meaning just that the techniques that are being used today, that are generating this amazing art and speech and text, really have been in the works for decades.
I think what's different now is a confluence of a bunch of factors, including a lot of computational power that just wasn't available in the fifties and sixties.
A tremendous amount of data, typically generated on the Internet, and just the clever techniques that are using the computer's ability to predict in order to generate human-level content in a bunch of different domains.
And it's exciting. On 60 Minutes this Sunday, there was an expert who said it's not just predicting, it's actually thinking through and reasoning. Do you think we're there yet?
You know, that was Hinton, I believe, one of the sort of progenitors of reinforcement learning.
I hear different things from the community, right?
You know, on the one hand, it sure looks that way.
So if you look at GPT-4, not only can this chat system give you an answer, it can give you the reasoning behind the answer, which is very interesting and exciting.
And so it can do things like tell you how you could stack a bunch of random items so that they would stay up in a tower, right.
And then give you the why.
So some people believe that there's kind of reasoning behind that.
Other, more cynical voices might say, you know, gosh, what's really going on here is that it's just so good at predicting the next word, right, that if you ask it a question, it can answer it.
And one of the major differences between previous versions of ChatGPT, for example, and the current version is that a lot of people interacted with GPT-3, including people that were hired by companies in order to answer your questions.
And any time the system got something wrong, somebody would correct it, a human would correct it.
So of course it got better and better and better, sure.
But a lot of that had to do with the interaction between the system and real people that were correcting it over and over and over again.
So also learning how to use it.
Yeah, and that's the point.
I think one of the interesting points you make is how there are different theories about how it actually works.
We really don't know what is going on precisely in that black box, right?
Yeah, it's hard to say.
Right?
So when I describe this work, you know, I talk in terms of prediction, and what it's predicting is the next word or the next pixel or the next sound.
And, you know, one of the concerns I have about the technology, frankly, is that it's optimized for plausibility, not for accuracy.
Right.
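To make that point concrete, here is a minimal, purely illustrative sketch. It is not how ChatGPT works internally (real systems use large neural networks, not word counts); the tiny corpus and the bigram counting below are invented for this example, and the only claim is that a model rewarded for plausibility repeats whatever its training data says most often.

    # Toy next-word predictor: "plausible" just means "seen most often in training."
    from collections import Counter, defaultdict

    corpus = ("the moon is made of cheese . "
              "the moon is made of cheese . "
              "the moon is made of rock .").split()

    # Count what tends to follow each word (a bigram table).
    next_word = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_word[prev][nxt] += 1

    def predict(word):
        # Return the most frequent continuation ever seen after `word`.
        return next_word[word].most_common(1)[0][0]

    print(predict("of"))  # -> "cheese": the most plausible continuation, not the most accurate one

A real language model predicts over vastly more context, but the objective has the same shape, which is why a fluent, confident answer is not by itself evidence that the answer is true.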
Because Alexa has already given out wrong answers about the 2020 election.
Absolutely.
She's full of you-know-what.
And when you think about the quality of the information on the Internet, it's like, wow, aren't we really kind of creating an antisocial psychopath by letting A.I. learn from, I think, a pretty darn toxic environment like the Internet?
That's maybe one of the risks, too.
Yeah, there's a number of risks.
I mean, I remember years ago when Microsoft, right here in the Pacific Northwest, actually released a chatbot called Tay, and the company just did not anticipate the vitriol that Tay would encounter on Twitter.
And within a few hours, Tay had been sort of retrained to say things that were very, very problematic, racist, sexist and so on.
And the company ended up having to pull the chatbot.
There's a lot of toxicity out there.
And, you know, to the extent that these systems are trained on us, well, we're very imperfect.
So you've been working on committees and in journals, with groups like Without My Consent and the We Robot policy conference, for a long time.
When I look back in your resume, you were into this in the early 2000s.
I mean, it's been coming.
You know, I read Isaac Asimov as a kid.
I refer to that initially.
So we've all been thinking about this.
So my question for you is, do you think that we are ahead of the curve, or with the curve, right now?
Are we already desperately behind the curve on this?
I think we're either, you know, at the bleeding edge or in front.
Right.
I mean, so I would tell you that, you know, right here, about 40 minutes from where I'm sitting, we hosted the inaugural Obama White House A.I. Policy Workshop, right at the UW.
And they came to us in part because the region is so strong technically.
And you have players like, you know, Microsoft, but also the Allen Institute for A.I.
So I think here we are in this region, we're really at the cutting edge.
And I think the United States as a whole is, again, also really at the bleeding edge of this stuff.
You've testified in front of the Senate, though.
Do you think they get it?
Do lawmakers and policymakers really get this?
I know the people in the labs might be at the cutting edge.
Yeah, I'm talking about the rest of us from a regulatory and control perspective.
I think it really varies.
You know, I'm often quite concerned, when I interact with policymakers, about what some of the gaps are, and not because these are stupid people.
They're very smart.
We are.
Yeah.
But the truth of the matter is that if there isn't enough expertise about these systems in government, then they need to rely on the word of industry.
Right.
And they can, and do, make mistakes.
So I will tell you, though, it does vary.
I mean, our Senator Maria Cantwell, very, very wise on this stuff.
They are well versed.
Are the companies too much in control of this, too? Do we need some citizen oversight?
Is it too corporate?
Are we going to see social media again, with all the negatives that, in my humble opinion, it's caused?
Or do you think we'll be better this time, because we know what can potentially happen?
We don't want Skynet.
I think that industry and academics and others are calling artificial intelligence the next transformative technology of our time.
They are analogizing it to things like electricity or trains.
Right.
And my view is that if artificial intelligence is going to change everything, then one of the things that needs to change is law and legal institutions.
And that's not up to the companies.
Right?
Right.
Nor to the academics.
It's up to policymakers whom we elect.
And I think that government needs to be very active in this space.
Who is liable for this stuff if somebody gets hurt? Is that framework built yet, the legal framework?
I know you're a lawyer, too.
Yeah.
Among everything else that you are.
But I mean, who is responsible if a robot turns around and kills me, or a self-driving car runs somebody over?
They can't do what social media did, which is say, Hey, we're just a platform.
Don't look at us.
Don't look at us for destroying democracy in our society.
We're just a platform.
I don't think that's going to work this time.
Well, we'll see, right?
I mean, the idea is that they build these systems that are based on foundational models, and then people build things on top, you know, applications on top of them.
And the companies probably would argue, look, we're just giving you a set of tools, and what you do with them is up to you.
Right.
They will argue this.
Yes, maybe so.
But and we'll see how far it goes.
Right.
I mean, I think certainly with a product like a driverless car that hurt somebody, the manufacturer is going to be liable.
I tell you, as a law professor, what I've noticed is that the law is much more aggressive when bones instead of bits are on the line.
Right.
So if people get physically hurt.
Yeah, I'm very sure that the law will locate responsibility, in part, in the people that are making these systems.
I think the concern is what happens if these systems violate our privacy, make us feel bad, encourage us to do harmful things as some of these systems have done.
You know, that's the place where I think liability is unsettled.
Last 60 seconds here. You talked about the merging of the physical and the virtual, that is, obviously, implanting robots with A.I.
A lot of people are struggling for end use cases.
I suggest that it's going to be companionship.
And I really think the chances of a major societal change there are enormous when you have attractive AI driven robots able to provide companionship to people.
And I'm speaking generically, euphemistically here when it comes to companionship.
Sure.
Do you see that as a threat, and as a solid use case that will really drive this thing, at least early on?
Well, I see it as as an opportunity, but also really as a concern.
You know, one of the things I've written about recently with an MIT roboticist is about Replika.
And Replika is this company that creates an avatar that will chat with you and has different modes.
And initially it actually had a mode where it would flirt with you and even engage in erotic chat with you.
And so one of the things that happened was that when the Italian data authority complained to Replika, they responded by shutting off, all of a sudden, the romantic part of the chatbot.
And people got really upset, because they had developed these intimate connections with these machines, and ultimately Replika had to reverse itself and restore that functionality for their users because people were really upset.
So, you know, I think that we cannot underestimate the extent to which people will understand anthropomorphic machines as though they were really social and human.
And so, yes, that creates the opportunity for social interactions, including companionship, but it also creates dangers as people come to be so much closer with what ultimately are our machines.
Ryan Calo, thanks so much for coming to Northwest Now.
My delight.
Joining us now is Chirag Shah, a professor and co-director of the Center for Responsibility in AI Systems at the University of Washington.
Chirag, thanks so much for coming to Northwest Now for a conversation I've been wanting to have for a long time about A.I. and the advance, the advent, of this artificial intelligence phenomenon that we're all experiencing right now.
I think one of the most interesting things that I wanted to bring up with you right away is that you're a little bit of a contrarian: where I lie awake at night dreaming of the Terminator coming and killing the human race, you're a little less concerned.
Why?
So, yeah, I'm not too concerned about Terminators taking over at this point.
I think there is some truth to why we should be concerned about some of those realities working out.
But if you look at the Terminator, there are elements there that should be concerning to us.
And these are some of the elements that some people, like Nick Bostrom and other philosophers, have warned us about with A.I.: the issue of control, which is what happens in the Terminator.
Well, I'm not concerned about those kinds of robots coming over and destroying humanity.
I am concerned about us losing control.
So as we start to give more control to things we believe can make intelligent decisions, we slowly lose our agency.
And then philosophers and others have shown through thought experiments that it's not too long before these systems realize that to save humanity, they have to kill the humans, or they have to take the humans out of the equation, to achieve their goals.
So yeah, that trajectory is quite possible, because if you tell A.I., hey, solve the climate crisis, which sounds like a great goal, great.
Let's take all the CO2 out of the air.
Well, the first step may dang well be to wipe out the cities and cars and everybody in them, and I think that's the Terminator scenario.
So when I say I'm not concerned about the Terminators, I don't believe that we are at the point where a simple switch can be flipped and suddenly Skynet takes over. I think this is going to happen gradually.
So, yes, there is still a danger of something like this happening, but I think this happens gradually.
So Bostrom, for instance, has this thought experiment about an AI that makes paperclips, right?
So forget about solving climate crisis.
Simple, very innocent thing of making paperclips.
But he shows that ultimately this A.I. goes on to destroy the universe, because it is really trying to make as many paperclips as possible and it needs all the resources.
It needs everybody out of its way, and it's off to the races to achieve its narrow goal.
That's right.
Yeah.
Okay.
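For readers who want Bostrom's thought experiment in concrete terms, here is a deliberately silly sketch; everything in it is hypothetical and not from the broadcast. An optimizer rewarded only for paperclip count has nothing in its narrow objective that tells it to stop, so it stops only when the resources run out, or when someone adds a constraint the goal itself never contained.

    # Toy version of the paperclip maximizer: a narrow objective with no side constraints.
    def maximize_paperclips(resources, respect_limit=False, limit=100):
        paperclips = 0
        while resources > 0:
            if respect_limit and paperclips >= limit:
                break              # the constraint the narrow goal never includes
            resources -= 1         # consume one unit of whatever it can reach
            paperclips += 1        # the only thing the objective rewards
        return paperclips, resources

    print(maximize_paperclips(1_000_000))                      # (1000000, 0): everything consumed
    print(maximize_paperclips(1_000_000, respect_limit=True))  # (100, 999900): a limit changes the outcome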
Now, here's the cynical part of me.
We're in a major AI hype cycle.
Alexa tells us that the election was a fraud in 2020.
Can't get it right.
The stuff that I read coming out of GPT is garbage.
It's just like a kid trying to write 500 words when you say, write 500 words.
The quality of information you get from scraping the web is certainly questionable.
So is this an investment hype cycle, or is it real?
And if it is real, what is the real end use?
What is the end market for this, do you think?
I think that's a great question, and this is what I believe we should be spending more time thinking about, because there are benefits and there are harms.
But what I see is people not spending enough time on what the right kinds of benefits and the right kinds of harms are.
So the Terminator thing that you mentioned, I think that's the wrong kind of harm that we should be worried about.
There are other kinds of harm that are real.
They're happening now, so we should be working on that in terms of the benefits.
So, you know, I work in information access; I care about information access.
One of the things that people like me have complained about for years or decades is the search engine model.
It's pretty old.
It's not very humane.
It's not how humans interact.
We don't throw a bunch of keywords at each other.
It's biased and it's controlled by the corporations, too.
So you have that?
Well, that too.
But, you know, we need to change that modality, so we have to be able to interact in natural language, right?
Being able to ask questions, get answers.
So that's one benefit that's happening.
So that's kind of a really good use of it.
Unfortunately, that could also make things worse, because now you're interacting with them; you gave the example of Alexa, but ChatGPT too, take all of these examples.
You're interacting with them in natural language.
What it does to you as a user is it builds this trust, because now you have an entity that's talking back to you that, first, is able to understand your natural language.
It's talking back to you in natural language, the language that you understand.
The only other entities that we have ever had that with in our history are humans, right?
And so all of that trust that's built with this language over thousands of years now immediately gets transferred to this interaction.
So it's an arms race between trying to convince us that it's helpful, useful and real, and, on our side, insisting on some transparency that says, listen, this wasn't a person, this was an A.I.
And I can see that being a new arms race to some degree, even for non-malevolent uses, between people trying to persuade you, trust me as I take your credit card number, and, hey, this image was generated by A.I., just so you know.
Absolutely.
So I think this natural language interaction makes it really hard for people to see that they're still talking to a machine that has all these flaws and is not an expert A.I. Because our intuition is, if somebody is able to talk to us and understand us, and even sympathizes with us, then they must be trustworthy.
So I think educating people, and people being able to understand that, is a key element here.
Great transition to education.
You teach about this at the University of Washington.
You've talked about A.I. anxiety amongst students and scholars, which I interpreted to mean, you know, if you're not into it, you're out.
What is A.I. anxiety, and how are you dealing with this in the classroom?
Yeah.
So this is a real thing, where students are starting to feel this anxiety, and they see this everywhere.
Obviously, it's in the media.
It's, you know, their friends, their colleagues; everybody's talking about it.
So there are users who are the regular students.
This is a student in any major; they don't want to be left behind.
So they want to make sure that, you know, if their colleagues or friends are using these A.I. tools and other resources, they should be able to use them too.
Why shouldn't they use it?
They don't want to lose the competitive edge or at least know how to.
Exactly right.
And then there are the students who are in majors like computer science, information science, engineering, fields where a lot of their work is being affected and has suddenly transitioned into this new era that they were not accounting for.
So now what do they do?
Do they fight against this, insisting that the things they were doing before are still as valuable, or do they give in and maybe get taken over by it, taken over by A.I.?
So what should I be doing now?
So I get this question a lot from students who are working on research projects, or from researchers: what do I do now?
You know, should everything now be done through these large language models and other A.I. tools?
Where do I play a role?
Am I just a consumer of this?
Am I just a tool builder?
Now, where is the real research?
Well, if folks who are being college educated and beyond at the University of Washington, who are into this, are worried, so is the average person who works at a job at a factory.
Do you think A.I. could create what you might call surplus people, where there are only so many people that really are needed for the jobs, who can prove their worth?
Everybody else is kind of a surplus person.
Is that possible?
So I think we've seen this in other trends too, where in the beginning of this kind of technology development, the technology usually augments; it's helping. But there's this kind of a hill, kind of a curve, where up to some point you're actually building capacity, you're adding value, until you get to this tipping point where you realize that you can add more capacity and more efficiency if you start eliminating some of the human elements in this.
So now you start coming down the other side of the hill, and that's where you start losing some of these jobs, some of these tasks, because machines can do them much faster, better.
So we are climbing the hill, and it's looking so great.
We are adding all kinds of possibilities.
Right.
But some who are skeptical about this, they are looking at the other side of the hill, because they have seen this happening in the industrial era.
They have seen it happening in all kinds of different eras that we've lived through.
But it also sparks new things on the backside of that hill, maybe new information, new uses, new ideas, new processes, maybe another hill.
Right.
And so, the question here...
So I think one thing is clear: this is adding value, and it's going to keep adding value.
There are actually projections about it increasing GDP, projections about how it's going to add to our economy and other aspects of our life.
That being said, one thing that we haven't focused on is inequality.
So this is not going to benefit everybody equally.
Yeah.
And so it is very possible that this is going to create more inequity.
And we already have a digital divide, where not everybody has access to all the technology, broadband; you know, some of those things are real problems.
This is likely to intensify some of those divides.
Interesting.
Chirag, thanks so much for coming to Northwest Now.
My pleasure.
Thank you for having me.
One would like to say that artificial intelligence is like any other tool, be it a car or a hammer or a gun.
The operator is the one who decides whether it will be used for good or evil.
The bottom line: that analogy breaks down with A.I., because it can theoretically learn and develop its own tools to meet its own goals.
Something that resembles self-awareness is where the real danger lies.
And we're probably already not being careful enough as we roll it out.
Maybe you should unplug your computer.
I hope this program got you thinking and talking.
To watch this program again or to share it with others, Northwest Now can be found on the web at kbtc dot org, and be sure to follow us on Facebook and Twitter at Northwest Now.
A streamable podcast of this program is available under the northwest now tab at kbtc dot org and on Apple Podcasts and Spotify.
That's going to do it for this edition of Northwest Now. Until next time,
I'm Tom Layson.
Thanks for watching.