VPM News Focal Point
Technology Expert on the Biggest Threats to Democracy
Clip: Season 2 Episode 17 | 13m 36s
Expert says AI, fake news and social media are not the biggest threats facing democracy.
Siva Vaidhyanathan, media studies professor and Director of the Center for Media and Citizenship at the University of Virginia, explains what he sees as the biggest threat to American democracy today.
VPM News Focal Point is a local public television program presented by VPM
The Estate of Mrs. Ann Lee Saunders Brown
BILLY SHIELDS: Our guest this week is an expert on the impact of technology on society.
He's written books about social media, search engines, and big data.
He's also a Professor of Media Studies and Director of the Center for Media and Citizenship at the University of Virginia. Siva Vaidhyanathan, thank you for joining us.
SIVA VAIDHYANATHAN: Oh, it's my pleasure!
BILLY SHIELDS: So, tell us about the focus of your teaching at UVA and the Democracy Lab.
SIVA VAIDHYANATHAN: Yeah, so one of the things we've been doing at the University of Virginia for the past few years is exploring the essence of democracy: historically, philosophically, economically, politically, and in my case, technologically.
And we are committed to understanding this complicated relationship between digital technology and our aspirations for democracy, because of course, we would all love to live in a world in which our technologies allowed us to deliberate clearly and equally, and raise the whole sense of the knowledge base of society, so we make better decisions collectively.
But we also know that's not how it turned out, right?
It turned out that technologies drive us apart as much as they bring us together.
And so, the question is under what conditions, and how can we do better?
BILLY SHIELDS: Now, you've been at the forefront of analyzing the tech world for decades. What's your top priority today?
SIVA VAIDHYANATHAN: Oh, my top priority today is to get the public to understand that the technologies that arrive in our lives are not given.
We get to resist, and we get to insist that they be better to us and better for us.
We can only do that by having a vocabulary about how technology affects our lives.
Having an agenda saying, look, we want to minimize surveillance!
Maximize privacy.
We want to have our own personal autonomy.
We also want the ability to speak clearly, openly, and freely to each other, but we would like to do so in an environment in which we're not harassed, and we're not dismissed, and we're not dehumanized, so that the best ideas can float to the top.
If we care about those issues, if the public demands those issues, then our regulators, our legislators will have to respond.
And we'll have to influence the technology industries to do better upfront rather than rolling out technologies, and then trying to fix them later, right?
What I would love is for our society to have a clear sense of what we actually want out of a democracy going forward, and then have our technology companies and our technology leaders respond to our clear demands.
That can help us build a better information ecosystem for the 21st century.
BILLY SHIELDS: So, what are your greatest concerns about fake news and the improvements in AI that make it even harder to distinguish real from fake?
SIVA VAIDHYANATHAN: Yeah, so I have a bigger issue than just fakeness, right?
Fakeness has always been with us.
The real challenge for us is not so much discerning truth from falsity.
That's always going to be a challenge; it always will be a challenge.
I don't mean to minimize it.
But the bigger problem is the amount of garbage that arrives in our lives, and it doesn't all have to be fake, false, untrue, or misleading; it can just be twisted, narrow, or highly emotional.
All of that garbage that arrives in our lives distracts us, and makes it really hard for us to connect with our fellow citizens, and talk deeply and informatively about the challenges that are in front of us!
So, whether it's a challenge like how to deal with the next pandemic, which could come in 12 months or 36 months, we don't know!
How to deal with climate change, how to deal with human migration, which is happening all over the world, not just the southern border of the United States.
What are the consequences?
What can we do about it?
These are deep conversations we should be having with the best information in the calmest possible way.
And nowhere in the world are we doing that, largely because we are so overwhelmed by so much highly emotionally charged stimulation.
Again, not all of it untrue, just not healthy.
And so, I would love to be able to have a news system and a deliberation system, places where we can talk it out in ways that are calm, respectful, and informed.
That's our only hope, that I know sounds a little bit naive, (chuckling) given where we are right now, and the anger that we have in the world, but I do think that has to be our north star, and we have to aspire to improve ourselves, both as citizens and as technological consumers to get that done.
BILLY SHIELDS: You say social media undermines our democracy.
Why and how?
SIVA VAIDHYANATHAN: Yeah, social media undermines our democracy largely by keeping us segmented, keeping us in our tribes, in our interest groups, keeping us with a constant sense of affirmation.
If you (chuckling) engage on Facebook or on Instagram, or even on WeChat, a Chinese system that is the most popular social media platform in China, those systems are designed to give you more of what you say you want, right?
So, every time you engage with those systems, somebody is telling you that you're really smart, and you're really funny, and you really know what's goin' on, and your opinions are correct, and you're very good looking right?
So, (laughing) all of that positive affirmation which keeps us going back, that's a good thing to have in your life, but when it comes to understanding complicated issues in the world, it's not actually healthy.
So, all of that stimulation over time, all that affirmation, all that segmentation, so that you're pretty much only conversing with people who agree with you, after a while, that solidifies, and really undermines a sense of citizenship.
Because a responsible citizen is one who is open and respectful of his or her neighbors, regardless of where they come from and what their assumptions are.
And that's hard work, (chuckling) that's always been hard work, it was hard work long before we had social media and cell phones in our lives, it's even harder now.
BILLY SHIELDS: How is AI going to change how we live?
SIVA VAIDHYANATHAN: AI is already changing how we live.
So, it's not even a future tense question.
We sometimes see these sort of science fiction visions of what AI could do for us, right?
That we could be walkin' around with eyeglasses that present information about everybody who's coming toward us.
That we could have lots of things in the world, stimulation that confuses us and undermines us, that we all could lose our jobs if robots can (chuckling) do our jobs better than we can, right?
All of these things are probably not worth worrying about anytime soon, largely because they all assume that AI will work as it works in the movies, and the movies usually aren't accurate. What we do have to pay attention to is the ways in which AI is currently being used.
It's currently being used, embedded in Facebook, embedded in Twitter, embedded in YouTube, embedded in Google, in ways that influence what we think about the world, what we think we know about the world.
And it's almost invisible.
We have no way of demanding to know what the principles and priorities of those systems are, we just don't have the laws and regulations that can get us to the point of demanding that those companies be transparent with us.
AI is also built into our legal system.
It's being used to determine the length of sentences in some states.
It's being used in cities like Los Angeles for predictive policing, where historical crime data helps guide police to put resources in certain neighborhoods and not in others.
Now, of course, that means historical data is influenced by our racist past, and that's not healthy.
So, we actually have real world consequences of AI right now that we tend not to pay attention to because we're so focused on the future and on the science fiction.
BILLY SHIELDS: Along the same lines of real world consequences, how do you think that AI might affect the current crisis in the Middle East?
SIVA VAIDHYANATHAN: Oh my gosh.
Well, so one of the things we know is that AI has been tested for nearly two decades on systems like missile targeting, right?
We can assume that Israel, with the support of the U.S., is using state-of-the-art missile targeting systems!
That means that part of what they're doing is probably influenced by AI, but we can't know that, right?
The systems are proprietary, and of course, military matters are highly secretive.
But we do know in general that AI is being used for things like missile targeting.
Beyond that, AI is being used for security in all sorts of ways, and it has been in Israel for a long time.
It's being used for facial recognition.
And that is increasing around the world.
It's being used for facial recognition in the United States by police forces too.
When people enter stadiums, for instance.
And one thing about facial recognition is it doesn't work very well.
And because it doesn't work very well, all kinds of false positives can come up.
And you can find yourself in handcuffs because you happen to look like the wrong person.
And that's especially true for people who have facial characteristics that are hard for AI systems to pick up.
Like African Americans, or people of African descent, who have a much harder time with facial recognition AI systems because those systems are trained on inadequate samples.
And they're generally trained to identify the facial features of lighter-skinned people.
So, we've seen this problem around the world.
And in a place like Israel, which is full of diverse people, it can create all kinds of trouble if the state depends on it too much.
So, I would keep an eye on that.
With missile targeting, we'll really never know how well things work.
It'll always be top secret.
We might get some indication two, three years from now, but when it comes to getting a sense of who's on a train, who's on a bus, who's crossing a border, right?
Those are really important questions.
And those are the situations where we have to ask, do we want to outsource this very crucial decision-making process to a machine we are not allowed to understand?
I mean, it's one thing to outsource it to a machine, but to outsource it to a machine with no accountability, that will then influence the exercise of power over people, that's somethin' we really have to dig deeper into.
BILLY SHIELDS: How concerned are you about the impact of social media and fake news on the upcoming 2024 election?
SIVA VAIDHYANATHAN: Yeah, you know what's interesting?
I'm a lot less concerned about social media and what we tend to call fake news in 2024 than I was in 2016 when it was a real problem.
And it was not just a real problem in the United States, it was a real problem in the Philippines.
The year before, it was a real problem in India.
One of the things that I have appreciated about the ways that people have engaged over the past eight years, is we recognize these problems even if we don't fully understand them, and sometimes we simplify.
We do have a sense that our media diet should be bigger than what we see on Facebook and Twitter, right?
We do have a sense that there's a lot of nonsense out there, and we should be suspicious.
That sense is growing, I'm actually pretty optimistic about our ability to keep that in perspective.
What I'm not optimistic about is our ability to be straightforward about the problems that we face.
Again, the problems like the migration of human beings, right?
Which is only going to increase.
And there's nothing any president can do about it.
The warming of the planet, which is only going to increase, and there's very little any president can do about it.
And the preparation for the next big health emergency, which no president can prevent, but a decent president can adjust to.
That we seem incapable of having grownup conversations about things like that?
Conversations that are not so heated, full of accusations, full of bigotry, that's troubling to me much more than any sense that there'll be an AI-infused video of one or the other candidate.
That's going to happen, whether or not people buy it or care about it is a kind of a separate, independent, almost trivial question.
No election will be decided based on a few AI-produced videos.
An election is going to be decided based on a thousand different influences in the world.
And the real question for us, not just for 2024, but for the next 40 or 80 years, is can we actually generate a healthy, deep, thoughtful democracy?
So, we can face these challenges without tearing each other apart.
BILLY SHIELDS: Siva Vaidhyanathan, media professor at UVA, thank you very much for joining us.
SIVA VAIDHYANATHAN: My pleasure.