HumIn Focus
Getting to Know AI: Are We Asking the Wrong Questions?
Episode 7 | 26m 46s | Video has Closed Captions
In this episode, we examine how A.I. is already at work in our everyday life and what responsible deliberation about the potentials of these tools would look like. While engineers and entrepreneurs have often been driven to achieve greater speed and efficiency, our humanities focus asks about the implications of A.I. and what offloading tasks once done by human beings might mean.
HumIn Focus is a local public television program presented by WPSU
How to Watch HumIn Focus
HumIn Focus is available to stream on pbs.org and the free PBS App, available on iPhone, Apple TV, Android TV, Android smartphones, Amazon Fire TV, Amazon Fire Tablet, Roku, Samsung Smart TV, and Vizio.
(gentle melodic music) (light intriguing music) - [Michael] We're already living in a world that I think most of us would've regarded as speculative science fiction 50 years ago.
- [Victor] It lives.
- Life, consciousness, a machine?
- It intends to put itself section by section in the orbit around the Earth, and from that day forever forward, Earth will be its slave.
- [Kelley] AI is artificial intelligence.
It is kind of a catchall term for a variety of technologies.
- [Michael] In popular imagination, it could mean robots and cyborgs.
I think more generally it means smart computers.
I think the meaning is sort of nebulous.
- [Cindy] Broadly speaking, AI is a field of study and practice that really thinks about how machines can think and do things intelligently.
- [Daniel] More than anything, AI describes the capacity of computers to do tasks that we used to think only humans can do.
- Everywhere you look some machine's doing the work that ought to be done by men.
Everything's machine, machine, machine.
The things got out of control.
We've created a monster that's putting us all out of jobs.
(gentle ethereal music) - [Automated Voice] Getting to know AI.
Are we asking the wrong questions?
(gentle ethereal music) AI has the potential to help humanity by enabling new discoveries and improving efficiency and productivity.
However, the extent to which AI will help humanity depends on the way it is developed, regulated, and used.
- [Reporter] The industrial revolution effectively freed man from being a beast of burden.
The computer revolution will certainly free him from dull, repetitive routines.
- We have always interacted with technologies.
Humans are a technological species.
We've always been in the business of developing tools that extend or supplement our natural human abilities in different ways, and I think the kinds of technologies we're building today are no different.
- [Reporter] The tool for extending certain of the powers of man's mind.
This tool is the electronic computer.
- I think there's always been the hope, a kind of optimistic hope, a techno utopian hope that developing intelligent machines, intelligent systems can improve upon human flaws to make us better in a variety of different ways.
That's what drives the development of AI and has driven the development of AI for decades.
- I think if you look historically, when new sciences emerge or new technologies begin to develop, the first reaction is excitement and optimism.
(intriguing music) ♪ How will I see ♪ ♪ In the 21st century ♪ ♪ On a needle up high ♪ - We're focused for a while on the oftentimes amazing benefits that new technologies bring to our lives, and I think that's a perfectly reasonable response.
And over time we become more attuned to the kinds of risks and harms that the technologies can bring.
- I told this, old Tom.
- High speed machinery goes in.
But men like you and me go on.
- With regard to what we're calling artificial intelligence, we're just in the middle part of that normal life cycle right now.
We spent several decades being really excited about the potential for all of these tools and now we've begun to see very dramatically the kinds of harms that they can introduce.
(eerie insipid music) - We worry these technologies will become so powerful that they'll sort of be more intelligent than we are and maybe pose problems for our ability to act autonomously and with full agency.
The fear is that they'll go beyond our human capacities and then we won't be able to control what they do.
- [Victor] It's alive.
It's alive, it's alive, it's alive.
- Going back to the early 19th century, Mary Shelley's "Frankenstein," and that doesn't involve a cyborg, that involves a very organic creature, but the idea is that here's an experiment in creating a sentient being that gets out of control and it's the gets out of control part, of course, that has made that novel resonate for over two centuries.
So I think the moment we realized that there was even the glimmer of a possibility of creating sentient life by some artificial means, we realized how terrifying that was.
- I think there's more dread and distrust of algorithms or AI because of this general uncertainty about how new technologies might sort of fundamentally change how we think about humanity or how our social lives function and that's just a historical trend.
We've always sort of been very fearful of new technologies.
(gentle melodic music) - [Automated Voice] AI simulates different types of intelligence such as perception, reasoning, learning, problem solving and decision-making.
The type of intelligence AI simulates depends on the algorithm, data, and application it is designed for.
- Well, it goes back to that idea of mimicry that we can't distinguish them from us.
That was not a problem with Frankenstein's creature.
Now we've got devices that pass the Turing test with ease and can write persuasive essays that could get them into business school.
This starts to erode something we thought was the last ground on which we could base our distinctiveness.
That seems to be profoundly unsettling in a way that earlier technological developments were not.
- [Reporter] With the computer, as with any tool, the concept and direction must come from the man.
- [Cindy] I think it helps us to rethink what intelligence means for human beings.
- [Reporter] The task that is set and the data that is given must be man's decision and his responsibility.
- Intelligence has always been kind of narrowed down to a certain kind of rationality, right which is this kind of white male rationality from the West.
- I would like to- - And we don't think about intelligences that could happen for the non-human or the intelligences of like say people from the colonized nations and what kinds of definitions of intelligence necessarily exclude other forms of intelligence.
- [Reporter] The computer scientist once remarked that we have machines that compute with the speed of light, but with the intelligence of the earthworm.
- I guess if we're defining true intelligence in terms of human intelligence, it's not true intelligence.
It's an approximation, though sometimes we often talk about it as being synonymous or very close to human intelligence.
I would like to think that humans are much more complex than the kinds of intelligence that these technologies execute.
I think we can talk about artificial intelligence as being a kind of intelligence that doesn't necessarily have to be exactly like what humans do in order for it to be a kind of intelligence.
- [Reporter] There are many indications that data processing systems have permeated our society.
Transportation, insurance, and banking depend on computerized accounting and control.
- Technology scholars have been worried for a long time about the kinds of potential benefits and harms that different technologies pose.
- We have finally decided to put in the new labor-saving machinery.
Costs will be lower and production greater.
- That will be good news for the stockholders if it works.
- But computational tools have extensive reach.
They can scale enormously.
They operate very quickly.
Decisions that are made in corporate boardrooms in Silicon Valley can instantly roll out and have effects on literally billions of people around the world.
AI has a kind of scale and scope and reach that other kinds of technologies haven't had.
- [Automated Voice] AI relies on the quality and quantity of the data it is trained on, which can be biased, incomplete, or unrepresentative of real-world scenarios.
AI may lack common sense or domain-specific knowledge and can be prone to errors.
- Microsoft has unveiled some of the most powerful AI ever made public just yesterday and it's incorporating that into its Bing search engine and also its Edge web browser.
- We are much more aware of the ways that AI can fail us and the ways that it might produce mistakes, often mistakes that reproduce patterns of inequality that are really problematic.
- Can we ask it, "Can I trust you?"
- [Reporter] In fact, as the bot readily admits, this generation of AI is not yet to be trusted.
- So there's sort of a lot of negative consequences that come along with AI that mean that even though we have these hopes that they might improve human civilization in this like really grand way or fundamentally change how we think about humanity in a grand way, there's sort of a growing awareness that maybe it's not quite as rosy as we thought and there could be sort of more dangerous implications.
- You will always be able to trip up any new AI model by prompting it.
So I think we start with the responsibility each of us as users has to take, and yes, we will have many, many mechanisms to ensure that nothing biased, nothing harmful gets generated.
- I do think that new technologies have always radically changed how we go about our day-to-day life, how we relate to other people, and so there is a sense in which the kinds of transformations that they bring are deeply meaningful and bear reflection and attention and critical analysis.
♪ Don't scroll, let me ask you something first.
♪ ♪ Can someone please explain how this algorithm works ♪ - Most people, I think encounter algorithms through social media and maybe like email, systems like that that we're using constantly, but of course, over time they're being integrated into other sort of spheres.
For a very long time too, they've been used in credit systems and banking and insurance, and now we're seeing a lot of them being sort of integrated into public services and government.
Because these are active live systems that people use every day, they're constantly ingesting more data from the users as they use the platforms and then finding new patterns as that data might change.
- We typically think about algorithms as just purely technical systems, right?
They are sociotechnical systems in the sense that we require social interactions with technology for them to become working entities. Without the person annotating the training data sets, or without a person trying to build the model itself, we would have no data science systems.
So in that way, they are cultural systems.
- [Reporter] Today, there are working mathematical models of railroad systems, rocket engines, complete reactors and whole living communities.
- One thing I wish we did more, when we think about algorithms as cultural processes, as public problems, is to really understand these things as not just technical entities that can be fixed simply by changing the people around them or by making computer scientists more ethical.
I think there is something deeply important within the humanities, important in the ways that we are thinking and the way that we treat and understand the world, that is very different from the training of a computer scientist who is being trained to think about the world in logical, discrete, rational ways.
- [Reporter] The designer must be able to state precisely what it is he needs to know.
This is not always so easy.
(gentle melodic music) - [Automated Voice] AI changes the way people interact with technology, making certain tasks more efficient while creating new risks.
It also disrupts existing social and economic dynamics, raising ethical and moral questions regarding privacy, bias, and responsibility.
(funky upbeat music) - Social media is something that many people are using multiple times throughout the day, so those systems would be ones that people come into contact with more often than credit-rating systems, which maybe once a year or every couple of years they'll have some encounter with.
- [Daniel] As a result, we tend to notice the impacts of those tools on us less and less until some big catastrophe happens or they're brought to our attention for some other reason.
- There's growing awareness of these systems.
There's growing awareness that there is an algorithm on different social media platforms, but there is a wide variation in what people actually know about these systems from a technical standpoint.
- We are living in the future we always dreamed of.
We have Mixed Reality that changes how we see the world and AI empowering us to change the world we see.
- [Daniel] Artificial intelligence is, in my view more of a marketing term than anything else.
- So here's the question, what will you do with it?
- It's used to sort of sell a vision of computers taking over the kinds of work that humans would rather not do.
- One of the things that has happened over the last 20 or 30 years is we have offloaded a lot of our memory to devices.
I think a lot of the functions that we once assigned to ourselves, we've now assigned to devices.
- There are a lot of scholars who worry about the impact of technologies on our virtues, on our character.
Part of the worry here is that the more we offload some of these difficult conceptual or deliberative or political or normative tasks onto machines, the worse we become at doing them ourselves.
We know that character takes practice and the less we practice, perhaps some of those faculties are diminished.
- Some research suggests that people trust algorithmic decision-making more for objective decision-making, things that are a little bit more clear cut versus subjective decision-making.
Most everyday users don't need to be super well versed in algorithms to accomplish what they want to accomplish by using different systems.
Most of the social media algorithms are optimizing for similar goals.
They are trying to give people content that will be interesting, that they'll care about, and that will keep them on the platform for as long as possible.
And in some ways, the core service that they're providing to people is taking this vast amount of information and content that is in the world and then narrowing it down to a subset of stuff that the individual user will want to see.
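The ranking logic described here can be sketched in a few lines: score each candidate item by how well it matches a user's inferred interests, weight by popularity, and keep only the top few. This is a minimal illustration of the general idea, not any platform's actual algorithm; the field names, interest profile, and weights are all hypothetical.

```python
# Hypothetical sketch of engagement-based feed ranking: score each item
# by (inferred user interest in its topic) x (overall popularity), then
# surface only the highest-scoring items. All names and numbers are
# illustrative assumptions, not a real platform's system.

def rank_feed(items, user_interests, top_k=3):
    """Return the top_k items ranked by interest-weighted popularity."""
    def score(item):
        interest = user_interests.get(item["topic"], 0.0)
        return interest * item["popularity"]
    return sorted(items, key=score, reverse=True)[:top_k]

items = [
    {"id": 1, "topic": "cooking", "popularity": 0.9},
    {"id": 2, "topic": "sports",  "popularity": 0.7},
    {"id": 3, "topic": "music",   "popularity": 0.8},
    {"id": 4, "topic": "cooking", "popularity": 0.4},
]
# Interest profile inferred from past behavior (hypothetical values).
user_interests = {"cooking": 0.8, "music": 0.5}
feed = rank_feed(items, user_interests)
```

Even this toy version shows the dynamic the speakers describe: topics the user has never engaged with (here, "sports") are filtered out entirely, which is how such systems narrow the world down to what they predict you will want.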
- Wow.
- TikTok's secretly listening to us while we're watching videos.
I don't know.
- [Reporter] The answer to how this app gets to know you so intimately is a highly secretive algorithm, long guarded by TikTok's China-based parent company, ByteDance.
- On TikTok, the landing page when you go into the app is an algorithmic feed, so in that way you're invited to use the platform in this algorithmic context. It's new content, fresh content, that is purely the result of an algorithm recommending it based on a collection of data about what users do on the platform and some inference about what content they might wanna see.
There's this growing perception of being able to know users really intimately, to know a lot of information about who users are deep down even more so than individual users might know about themselves.
I think we're looking for self-discovery in a lot of different places, and this just ends up being one of them. Especially for younger generations, it just makes sense that it's a place where people would be looking to enrich their self-knowledge or to have these moments of discovery or revelation to kind of make sense of who they are.
And again, especially with younger users who are in their more formative years and trying to figure out who they are, this could be a very powerful way to think about and process who they are and who they wanna be.
- [Automated Voice] AI is training us to think in more data-driven ways and to rely on algorithms and automation to solve problems.
It is also promoting a culture of instant gratification and reducing our tolerance for uncertainty and ambiguity.
It may reinforce bias or limit our creativity and originality if we rely too heavily on it.
- [Reporter] The holy grail of artificial intelligence.
Building a computer that can do everything we can or even more, some believe that could help cure all types of cancer, eradicate poverty and create a more equal society.
- I think there is a very deep hope that AI will be able to solve all of our problems, and I think that sort of interest motivates us to emphasize above all what this technology might be capable of, if not in the present, in the future, rather than thinking about the limitations in the present moment.
- What we want is AI that enriches our lives, that is helping us cure cancer, that is helping us find climate solutions.
- [Reporter] But will the new AI arms race take us there or down a darker path?
- I think the real risks these technologies pose are not that they're going to rise up and rebel against their human masters.
The real risk is that like most technologies, they're going to amplify the kinds of existing power imbalances, inequalities, other forms of marginalization and oppression that exist in our society.
I think to the extent that people fear AI when they hear that term, the fears are very much shaped by the kinds of depictions of AI in movies as these, again, sort of autonomous robots or systems that have their own objectives and we fear that those goals and objectives are not our goals and objectives.
- [HAL] This mission is too important for me to allow you to jeopardize it.
- What are you talking about, Hal?
- But I think what we should more realistically fear at least in the near term, is that our AI technologies are pursuing the goals and objectives of the technology industry which may or may not align with our own goals.
- Models like GPT-3, they are showing real emergent intelligence.
In fact, 30% of the code of anybody who is writing code in Visual Studio and GitHub is being generated by these large scale AI models.
So I think having a co-pilot for every cognitive task is right within our grasp.
- We actually grant them more power over our lives without actually stepping back and realizing, oh, are we missing a larger picture of this? And then we are buying into the contracted services of these large tech companies that try to integrate the systems into our everyday functioning lives, but we don't actually question: do these things actually work?
Those are questions that I think we need to ask ourselves if we are really, truly interested in trying to see outside, or see the in-betweens, of these systems and what they can actually afford us.
- [Reporter] A boy made invisible by mysterious scientific force held in the sinister power of the berserk electronic brain machine developed by the boy's father.
- It is really up for grabs what we will decide the future will look like in terms of our interactions with AI.
- People in general need to be educated about these kinds of technologies because ultimately, if they're going to reflect our shared social and ethical commitments, that's gonna come from a kind of democratic process, a kind of democratic oversight that assumes individual citizens can understand the risks that these technologies pose, will put pressure on their representatives to pass laws that regulate them.
We do need people to understand these technologies and the way they're being incorporated into processes, decision-making, institutions that they rely on every day in order to provide the kind of democratic oversight that is gonna be necessary to ensure our technologies reflect our values.
- It matters how we layer human reasoning on top of those algorithmic decisions.
So what we actually do with the knowledge that they produce, how we interpret it and implement it, that really matters.
And that's a way I think we can have more of an intervention in preventing some of the negative impacts, or at least mitigating them.
- But he's a robot.
Without you, what could he do?
- There's no limit to what he could do.
He could destroy the Earth.
- We're already facing a number of potentially existential threats.
There's real reason to believe that new technologies are making those problems worse, increasing those risks rather than decreasing them and making it more difficult for us to address them rather than less.
So the kind of polarization that social media enables, the enormous resource intensive practices that go into manufacturing our technologies, the huge amount of energy that training machine learning models requires.
Those dimensions of our new technologies do I think contribute to other existential threats that we're facing.
- The humanities needs to have a much more central role in deciding how the systems should be designed, how the systems should play out.
More and more, we recognize that these are not problems that should be tackled solely by tech corporations; they should also be tackled by public legal advocates, by philosophers, or by humanists trained in literary analysis or in ethnography, to give us a sense of the human parts of these algorithms. It is not just about trying to fix the error, but really thinking about what this error tells us about the kinds of values that certain players privilege above others.
- Humans are technological beings.
We have always used technologies to achieve our goals, to interact with each other, to solve problems and so on.
At some level of abstraction, I don't think that our, even our most impressive and radical new technologies are changing that fundamental condition.
I don't think that they change what it means to be human.
(gentle melodic music) - [Automated Voice] AI creators should address the problems caused by engineers ignoring humanists by incorporating diverse perspectives in their development processes.
This includes involving social scientists, philosophers, and ethicists in the design and implementation of AI systems, providing clear explanations of how AI makes decisions and avoiding black box algorithms to ensure that their systems are designed to serve society's best interests.
(gentle melodic music) (bright melodic music) (bright melodic music continues) (bright melodic music continues) (bright melodic music continues)