FNX Now
AI, Artificial Intelligence: What is the Future?
7/10/2023 | 26m 46s | Video has Closed Captions
A new frontier in technology, will it bring utopia or dystopia to the world?
(film reel clattering) - Okay.
Good morning, everyone!
I'm Pilar Marrero, EMS associate editor and your moderator for today.
[background music] Today's briefing couldn't be more timely.
We are focusing on artificial intelligence.
Much has been talked about in the past year about artificial intelligence, a rapidly evolving technology that has been under research for decades.
We have seen the unveiling of chatbots that write scientific papers, legal briefs, news stories, sending shivers down the spine of every scientist, lawyer, and journalist, me included!
But, should we really worry?
On the other hand, some posit that if used appropriately, AI can be a revolution for education, journalism, science, and other areas of human knowledge.
However, FTC chair Lina Khan recently said that AE can deliver-- "AI", I'm sorry-- can deliver critical innovation, but also, quote, "turbocharge fraud and automate discrimination."
Those are pretty strong words.
The panel of experts will provide the basics to understand what AI is, and what it brings to our societies, but also to discuss the potential bias in AI data and the controversial use [background music fades] of copyrighted and creative material to train the technology, among other issues.
Our speakers are Hector Palacios, research scientist at ServiceNow Research.
He works on fundamental and applied research on artificial intelligence at the intersection of reasoning and machine learning.
Chris Dede, senior research fellow at Harvard Graduate School of Education and associate director of research for the National AI Institute.
Sean McGregor is a machine learning PhD, founder of the Responsible AI Collaborative.
Let's go first.
Hector, are you ready?
Go for it.
- [Hector] Thank you very much for the opportunity to be here today.
So, basically, where we are today, this is May 2023.
And then, suddenly, we have this text that seems to be written by a human, but it's not human.
We know that.
ChatGPT is a keyword that we might find.
They go by these other names, like "Language Models."
And, the first thing to keep in mind is that this is just software, right?
It's a program.
And, this is another kind of language.
All of these things come from mathematics, and from mathematics the whole idea of computing.
And then, these AI things that we have now are basically sophisticated programs.
And, while with mathematics, for people who know it, we tend to understand what is going on, and computing is a bit more automatic, there was a moment where programs became so sophisticated that we don't know what's going on.
So, we're in front of a class of programs that have some emergent properties, but software always had that, right?
These were things that the computer tended to do, and it was sometimes hard to understand what was going on.
And, in general, this is also very connected to who we are.
I'm gonna make these connections, because at the end of the day, this sort of technology's coming back to us to something that is very intimate; that is text, in this case.
So, about AI: I like thinking about AI in general, about all the technologies in this whole area, but maybe not as a science.
This has been around for perhaps 70 years.
And, I like thinking about AI like, like two lungs, right?
So, we have on one hand this system, a sort of System 1, about reactive behavior, about, you know, your guts.
You just react with that.
It's close to, perhaps it's close to what we expect from data, from statistics.
Just take a shot.
And, on the other hand, there is this part of us that can sort of stop for a second and think about what it is we are going to do.
And, this is-- this is an image just to emphasize that there are this sort of two modes of operation that we have.
And, the system today also has this capability.
So, when we talk about generating text, we can think about the cost of making a mistake.
So, if it's just-- I can just say something, and it's okay.
Then, it'd be just going with my gut, with my instinct; it's enough.
And, in other cases, like I'm writing a contract, a contract of-- or I have a sensitive conversation.
In these cases, I take the time to think about that, right?
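To make the two modes of operation concrete, here is a minimal sketch of the routing idea Hector is gesturing at, with the caveat that every function name and the threshold are hypothetical, not from any real system:

```python
# Purely illustrative: route a request to a fast, gut-reaction answer when a
# mistake is cheap, and add a slower checking pass when a mistake is costly.
# All names here are hypothetical stand-ins.

def fast_draft(prompt: str) -> str:
    """System-1 style: the first plausible answer, e.g. a single model call."""
    return f"[quick draft for: {prompt}]"

def deliberate_check(prompt: str, draft: str) -> str:
    """System-2 style: review the draft before releasing it.
    In practice this could be a second pass, retrieval, or a human editor."""
    return f"[reviewed answer for: {prompt}]"

def respond(prompt: str, cost_of_mistake: float, threshold: float = 0.5) -> str:
    draft = fast_draft(prompt)
    if cost_of_mistake <= threshold:
        return draft  # casual chat: going with the gut is enough
    return deliberate_check(prompt, draft)  # contracts, sensitive conversations

print(respond("caption this photo", cost_of_mistake=0.1))
print(respond("draft a clause for this contract", cost_of_mistake=0.9))
```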
And, basically what's happening is that all of these techniques specialize in this idea of this sort of System 1 behavior, this sort of gut reaction.
But, they don't-- they're not really good at this other side.
In general, this class of programs called Language Models, they-- some people have called them "stochastic parrots" in the sense that they are, in principle, repeating variations of what they have seen.
And, we also start to see these sorts of elements of generalization, and some consistency on text they have not seen.
So, there are some elements of that, but also, at the end of the day, they are based on data.
So, that means that they're gonna-- they're also gonna try to remain close to the data.
So, if I say "the CEO of the company is--", they are trying to put-- they are more likely to put names of males, and perhaps white males, and not necessarily a person of color or of another gender.
Right?
They would-- we'd hope they would-- be more consistent.
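One way to see this data-following behavior directly is to inspect a model's next-token probabilities. Here is a minimal sketch, assuming the Hugging Face transformers library and GPT-2 as an arbitrary small public model; the panel names no specific model or prompt, so both are illustration choices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is an arbitrary small model chosen only for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The CEO of the company is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=10)
for p, idx in zip(top.values, top.indices):
    # Which continuations does the training data make most likely?
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```

Running this prints the most probable continuations, which is exactly the kind of skew toward the training data that Hector describes.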
And, I think that we have passed a threshold where these properties have become even more useful.
And, this is basically because text doesn't have all the possible realities of the universe, right?
However, for situations when I'm not very familiar with the subject, they might actually convey some knowledge.
Or, if I am very familiar with the subject, sometimes I can quickly realize 'how is that good and how can I fix it?'
And so, then the next question is what do we do with this?
And, the two extremes are: this is just open bar-- we do whatever we want-- or we are cautious about it.
And, the answer is probably both.
Right?
On one hand, we have this risk, because now we have a machine that can generate text, meaning that if I have money, I can generate text that can lead to misinformation or manipulation.
I can increase over-polarization.
And, in general, I fear this sort of loss of trust in text, especially in quick communication, right?
We used to-- it used to be the case that we'd see some text quickly and say, "oh yes, that was totally written by a human", but this is not true anymore.
It can be one of these things.
And, perhaps if somebody has a big enough budget, they might be generating many variations of that, and perhaps one that is, like, close to me.
And, at the same time, there are some opportunities for automation, or for reducing the cost of some tasks that actually no one wants to do.
Right?
So, I'm gonna try to stop now-- I think I'm gonna be over time-- but there are opportunities for some kinds of tasks.
And then, I can go a bit deeper later on the-- on different angles here.
For instance, advertising, or generation of content.
Then, what is the issue of that?
Of copyright: this idea that sometimes something looks good and then we haven't really, like, looked at it.
So, we need to change our way to interact with text because of this.
And, in general, we need to think about the dangers and opportunities here.
I think I'm gonna stop for now.
Thank you.
- I wanna welcome our second panelist, Chris Dede.
He's senior research fellow at Harvard Graduate School of Education, and associate director of research for the National AI Institute for Adult Learning and Online Education. Dr. Dede, welcome.
- Well, thank you for inviting me.
I'll say a little bit about the National Research Institutes, then I want to talk about different ways that AI can be used.
And then, I'll talk about artificial intelligence and education which is my particular area.
So, in the United States, the National Science Foundation has now funded 25 national AI institutes.
Each of these is five years, 20 million dollars.
So, these are substantial investments.
Five of them are related to education, and the other 20: cybersecurity, improvements in agriculture, improvements in medicine.
You can imagine all the different things that might be involved.
So, the people who've thought the most about how these national AI institutes might help society are actually science-fiction writers.
Science-fiction writers have been thinking and writing about AI for the last half-century.
And, many of the things that are now appearing in the press are things that have been part of science-fiction writing for decades.
And, they've worked out a lot of interesting ways of thinking about the strengths and limits of AI.
And, if we want to-- just to overgeneralize-- there's two kinds of science fiction about AI.
In one kind, AI is an independent actor, out of control of human beings and typically getting into a lot of trouble, which is what we see in today's headlines.
When I was a graduate student, I watched the movie "2001: A Space Odyssey" where AI goes berserk and starts killing astronauts, for example.
But, the other theme, within science fiction is AI is a partner, a partner in which there's a complementary relationship between a human being and an AI.
And, we see that, for example, in Star Trek: The Next Generation where you have Captain Picard, the wise human Starship captain, and then you have Data who looks like a person, but is actually an android, a machine that's based on AI.
And, Captain Picard and Data have a complementary relationship.
Data is capable of things Captain Picard is not capable of.
Data is capable of absorbing, as you would expect from his name, enormous amounts of data in a matter of a second, and doing what's called "reckoning," which is calculative prediction.
It's mathematics, as Hector was saying, and making forecasts of different kinds.
But then, Captain Picard, because he's a human being, is capable of much more than reckoning.
Captain Picard has, sort of, judgment, applied wisdom.
And so, he's the one that's in charge of the Starship, and he uses Data's calculative predictions to help him make good decisions.
To illustrate this in a less fantastic way, there are cancer specialists, oncologists now, who have AI partners.
The AI can do something that no cancer specialist can do.
Every morning, it can scan 1,500 medical journals online and see if there's something new about the treatment of a particular patient.
It can scan medical records worldwide of similar patients undergoing a variety of treatments, and get advice about what's working and what's not working.
But, you would never want the AI making the decisions because the doctor knows things the AI doesn't know.
The doctor knows about pain and death, that death affects the family as well as an individual, and so on, and so on.
AI does not understand any of those things.
It's an alien kind of intelligence.
It's not a weak kind of human intelligence that's getting stronger.
It's an alien kind of intelligence.
So, what we see in that part of science fiction where you have human AI partnerships is something called IA; not AI, but IA: "intelligence augmentation," in which the whole is more than the sum of the parts.
The person does what the person does well; the AI does this reckoning, and the combination of the two accomplishes things that neither one can by themselves.
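A hedged sketch of what that division of labor can look like in code; the recommendation type, the options, and the scores are all invented for illustration, not taken from any real decision-support system:

```python
# Sketch of "intelligence augmentation": the AI half does the reckoning
# (calculative prediction over lots of data); the human half keeps judgment.

from dataclasses import dataclass

@dataclass
class Recommendation:
    option: str
    score: float      # the machine's calculative prediction
    rationale: str    # evidence a human can inspect

def reckon(scored_options: dict[str, float]) -> list[Recommendation]:
    """AI half: rank every option by its predicted value."""
    return sorted(
        (Recommendation(name, score, "pattern match on past cases")
         for name, score in scored_options.items()),
        key=lambda r: r.score,
        reverse=True,
    )

def decide(recs: list[Recommendation]) -> Recommendation:
    """Human half: review the ranked evidence and make the final call.
    The judgment step is deliberately NOT automated here."""
    for r in recs:
        print(f"{r.score:.2f}  {r.option}  ({r.rationale})")
    return recs[0]  # placeholder for the human's actual choice

choice = decide(reckon({"treatment A": 0.72, "treatment B": 0.64}))
print("human selected:", choice.option)
```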
So, one of the things that this national AI institute that I'm part of-- which is aimed at looking at workforce upskilling and reskilling, helping people have productive careers over perhaps a half-century-- is looking at AI-based assistants for instructors, and also AI-based assistants for coaches.
So, if you're teaching and sort of pouring knowledge and skills into students' minds, you have assistants that can help you with library searches, laboratory instruction, tutoring, student question answering, finding a good student learning partner in a large class, and so on.
They're very specialized, and what they do is take a lot of data, calculate, and make a recommendation.
The human being, then, is surrounded by those assistants, and the human being can be upskilled or deskilled.
So, if I, as the professor just say, "Oh, great.
"I can spend more time reading the newspaper now because the AIs are gonna do all this stuff for me", I'm actually deskilled and the student experience isn't any better.
It's probably a bit worse because the AIs don't understand all the things I just indicated about culture, and individuals having biological bodies, and ethical decision making, and so on.
But, if I upskill, now, I can be a much better instructor or mentor than I was before because the AI is taking over a lot of the routine stuff.
And, of course, this is nothing new in human history either.
The idea that machines can take over some things human beings do, and let human beings do better things, is part of the Agricultural Revolution, the Industrial Revolution, and so on.
So, (pauses) my concern is that the goals of education are not changing to take into account the things that I'm describing about improving the processes of education.
Because, how do we judge the success of educational systems?
How people do on high stakes tests, high stakes tests to get into Harvard, high stakes tests to get into law school, high stakes tests to, you know, determine maybe who gets hired for what position.
That's a reckoning.
AI does really well on those kinds of high stakes tests.
And so, if that's our measure of quality, we're preparing students to lose to AI, to be incapable of intelligence augmentation.
So, the big story that I'm not hearing is the story about changing what humans should be learning as opposed to the story about "evil AI taking over," which is a risk, but only if we let it take over.
Or, the story about having an assistant that's useful, but maybe that assistant just deskills you because it takes over part of your job.
But, you're not prepared to do anything better.
So, I'll just say one more quick thing about bias.
(pauses) I've co-authored articles and chapters about at least five sources of bias that can occur in AI systems.
You can have a biased data set.
You can have a biased algorithm.
You can have biased people interpreting the outcomes, and so on.
And, with work, one can eliminate or greatly reduce those five kinds of bias.
And, that's something really important to do.
But, at the end of the day, AI is like a parrot, but it's also like a mirror.
It's a mirror that we hold up to the internet and it reflects back what it sees about our society.
And, a biased society will always produce a biased image in the mirror.
So, we have to not only fix AI, we have to fix ourselves.
- I'm gonna go forward to the third speaker, Sean McGregor.
Sean?
Please, go ahead.
- Alright, I will take it from there.
Thank you.
Thank you very much to the first two speakers for leading in here, and the great discussion there.
The first slide here is talking about Microsoft deleting, quote, a "teen girl AI".
This is March of 2016.
Very-- one would hope it wouldn't happen again.
But, you go forward to January 2021, this time in Korea where a chatbot was pulled that had many of the same factors involved in it.
So, we clearly didn't learn from that past incident.
We can go to the Santayana aphorism of: "Those who cannot remember the past are condemned to repeat it".
This is lived time-and-again in human history.
And, we should ask, then, how do we remember the past in AI?
How do we make it so that we can make it safer, better, less biased?
That's where the project that I'm representing today via the Responsible AI Collaborative comes in.
The AI Incident Database is a collection of harms produced in the real world by AI systems.
It's inspired by similar systems in computer security, transportation, consumer product safety.
Basically, a thing happens in the world that produces a harm.
We record it; we make it discoverable.
And, very often we are recording it on the basis of reporting.
The people out there doing the hard work, like a lot of them on this call today. Let's just say, the result of more AI with the same safety culture surrounding AI and technology is why we are getting this takeoff of AI incidents, and concern, and attention for it.
What the AI Incident Database does is collect each of these incidents that happen in the world and put a number to them, so that we can actually create a community around those particular impacts and make it so that we can move to make the whole AI industry safer.
The whole sociotechnical soup of safety is a complicated one, and we need to build these information systems to understand it.
And so, each of the things you're seeing in this image is actually inspired by an incident in the database.
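As a rough illustration of what a numbered, discoverable incident record might look like, here is a sketch; the field names and example values are guesses for illustration, not the actual schema or numbering of incidentdatabase.ai:

```python
# Hypothetical shape of an incident record: a harm event gets a number,
# a description, and the reporting it is built on.

from dataclasses import dataclass, field

@dataclass
class Report:
    url: str       # the journalism the incident record is built on
    language: str  # reports are accepted in many languages

@dataclass
class Incident:
    incident_id: int   # the number a community can organize around
    date: str          # when the harm occurred
    description: str   # what the deployed system did in the world
    deployer: str      # who fielded the system
    reports: list[Report] = field(default_factory=list)

# Placeholder record for the March 2016 chatbot case mentioned above;
# the id and URL are made up, not real database values.
example = Incident(
    incident_id=1,
    date="2016-03",
    description="Chatbot withdrawn after producing offensive output",
    deployer="Microsoft",
    reports=[Report(url="https://example.com/coverage", language="en")],
)
print(example.incident_id, example.description)
```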
To give you a few more, little bit more fanciful incidents, this is the-- this is an incident that happened in China where a woman was shamed for jaywalking by a computer vision system that decided that this image of a woman on the side of a bus was a woman that was jaywalking.
So, she was shamed.
This is a case of a bus being mistaken for a woman!
Another incident.
In the incident database is a case where a vehicle was cited as a result of this image being taken of a woman's shirt.
It says "knitter".
It looks enough like the license plate K-N-I-9-T-E-R. And, this is an instance of a woman being mistaken for a car!
It's a weird and big universe out there.
And, we're making these systems that are very brittle.
They're more brittle than humans, and humans themselves are very brittle.
And, this is where, in order to avoid harms produced by AI systems, we really desperately need to look across languages, cultures, and geographies to collaborate and ensure that AI benefits all.
We're all out there experiencing the effects of AI systems and without bringing that experience back into one place, we're in trouble.
Who are we organizationally that are working on this problem?
Beyond a growing list of volunteers, we also have incorporated an independent U.S. nonprofit in 2020 called the Responsible AI Collaborative, which is organizing incidentdatabase.ai, the AI Incident Database.
And, it's built to support and integrate the efforts of many different people and organizations and in fact, reporters as well.
We're predominantly engineers working to build that knowledge base or build the community architecture of AI safety.
Among the-- there's a bunch of features I'm not gonna go into.
One I want to concentrate on here today, though, is translation.
We're accepting reports of AI incidents in 132 different languages. That is using AI machine translation, which-- it is not a perfect technology!
Incident 72 in the incident database is one wherein a Palestinian man was arrested because a machine translation of "good morning" turned into "attack them"!
- [Pilar] Oh!
(Sean laughs) It's a very powerful and brittle technology.
Something that's very useful.
Something we can use in society, but here we are.
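A minimal sketch of what that multilingual ingestion step could look like, assuming hypothetical stand-ins for the language-ID and translation models; no real pipeline or API is described here:

```python
# Illustrative ingestion: accept a report in any language, machine-translate
# it to English, and flag low-confidence output for human review, because
# translation is powerful but brittle.

def detect_language(text: str) -> str:
    return "ar"  # placeholder: a real system would run a language-ID model

def machine_translate(text: str, src: str, tgt: str = "en") -> tuple[str, float]:
    # placeholder: a real system would return (translation, model confidence)
    return f"[English translation of: {text}]", 0.55

def ingest_report(text: str, review_threshold: float = 0.8) -> dict:
    src = detect_language(text)
    english, confidence = machine_translate(text, src)
    return {
        "original": text,
        "language": src,
        "english": english,
        # Low-confidence output is exactly where "good morning" can turn
        # into "attack them", so route it to a human editor.
        "needs_human_review": confidence < review_threshold,
    }

print(ingest_report("صباح الخير"))
```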
So, what's next?
I will highlight that the places most likely for AI to produce harms are those that are farthest from where the AI system is developed.
A lot of AI right now is being developed in the United States, China, and Europe.
And, that is by no means the entirety of the world.
So, we desperately need all the people and languages, all the places that have AI incidents, which is everywhere, to collect those stories, submit them as incident reports, and be part of making AI good.
I'm gonna say there's a lot of people to credit, a lot of organizations to credit here.
I'm not gonna mention all of them, but say it is a truly community effort.
- There's still a lot of questions that we haven't been able to answer, but we don't have the time.
And, I just wanna take one of the questions from Amar Gupta.
He was asking about the do's and don'ts for our young Generation Z readers.
What are the-?
Just maybe give me one "do" and one "don't" with AI?
Each of you.
And, anything else you wanna comment.
Hector?
- Everything you read or see, you know, doubt whether it is coming from a human or somebody's trying to get in your head.
(pauses) And, don't just go along because something seems to be popular.
Just don't; don't rely on that.
I mean, learn what to trust.
- Dr. Dede?
- Don't think that an AI actually understands what it's telling you in the way that even the most ignorant human being would understand what they're telling you.
Do understand when you enter the workplace that if you can't do things better than an AI can do them, you're not gonna have a job.
- Sean?
- Do not rest at believing that it's magic, that it's just a magical answer machine that is beyond reason or human comprehension.
There's-- these are engineered systems produced by people.
Peel back the mystery, play with the technology.
Understand it.
Understand its boundaries of performance.
(pauses) - [Pilar] That is so true.
Thank you so much.
Hector?
You wanted to add something?
- Yeah.
So, this is probably a message for this audience.
So, the way we humans go through a big crisis is by thinking, overthinking it all the time, and then finally finding a solution together.
Right?
And so, there are many open problems here, and I think the most important thing that we need to do is to keep elaborating on the-- what we find problematic, what we find useful.
I mean, if we want a programmer somewhere to change whatever they are doing for configuring the software, we need that to become a PR problem!
So, their bosses go over them and they say, "yes, I read that, but I was busy last night at 3:00 AM and I just didn't check".
So, what we focus on is what's gonna be problematic, and that's gonna change things in the directions that we want.
So, this conversation is where this happens, and this is basically like a new reality.
We have a new moon, and now we need to rethink all of this.
- Sandy?
I don't know if you just-- do you wanna say some final words?
We have one minute.
- I am in an absolute [background music] kind of (pauses) extreme moment of trying to process all of this!
(Pilar laughs) But, it feels so great to have these experts spend time with us and help us feel a... we can understand this.
We can get this.
And, I love this idea of this call to action, to peel back the mystery that it's not that we're incapable of deciphering what we have created.
And, we have created AI.
So, I feel very optimistic, whereas I went into this feeling like I'll never get it.
So, thank you (Pilar laughs) to all our speakers for your confidence in us.
- Thank you, everyone, to the speakers and to the reporters.
♪
FNX Now is a local public television program presented by KVCR