At-Large
Artificial Intelligence, Surveillance and our Civil Rights
3/25/2021 | 57m 20s | Video has Closed Captions
The widespread bias in biometric surveillance.
Shalini Kantayya, Director/Producer of the film Coded Bias, and data journalist Meredith Broussard, give us insight into the widespread bias in biometric surveillance, artificial intelligence and data science technology.
At-Large is a local public television program presented by Cascade PBS
(bright music) - [Narrator 1] We all want to belong.
- [Narrator 2] We want respect and understanding.
- [Narrator 3] We want someone to have our back.
- [Narrator 4] We want a better life for the ones we love, that's why over a million of us are members of BECU.
- [Narrator 5] Because we know that together, we can do so much more than we ever could apart.
- [Narrator 6] It's not banking as usual, and that's exactly the point.
- [Narrator 7] So join us.
- [Narrator 8] BECU, more than just money.
- Hello, and thank you for joining us.
I'm Mark Baumgarten, Crosscut's Managing Editor and Host of the "At-Large" series, where we discuss the biggest issues with the people who know them best.
Tonight, we'll be talking about the documentary film, "Coded Bias" which explores issues with artificial intelligence, AI and questions about our civil rights and the future of our society.
A couple of quick notes before we begin.
There will be an audience Q and A at the end of the event.
You can submit a question in the chat that you'll see on the right-hand side of your screen, right over there.
Also, before we get going, I want to say thank you to our sponsor, BECU who has been providing financial services and support to the community for over 85 years.
Okay, on with the show.
So we live in a world that would have been unthinkable even a few decades ago.
Advances in fields such as biometric computation and artificial intelligence have given rise to a world of incredible convenience.
A vast storehouse of knowledge is available with a touch or a whisper, and this technology not only speaks to us, it seems to know us.
But while our technologies have the sheen of something new, they contain something very old, the racism, sexism, and other discrimination that has long been a part of our history.
The machines we have made don't just suffer from the ills of our society, they threaten to perpetuate them, and as this technology becomes more and more ubiquitous and invasive, it may become a threat to our civil rights.
That is the topic of "Coded Bias", the latest film from our guest, Shalini Kantayya.
Shalini is an award-winning filmmaker whose films have screened at the Sundance and LA Film Festivals.
She is a TED Fellow, a William Fulbright scholar and an associate of the UC Berkeley Graduate School of Journalism, and she is now working on a documentary about TikTok.
Joining her is Meredith Broussard, an associate professor at the Arthur L. Carter Journalism Institute at New York University, the author of "Artificial Unintelligence: How Computers Misunderstand the World", and one of the expert voices in Shalini's documentary.
Shalini, Meredith, thanks for being here.
- Thanks so much for having us, it's an honor.
- Thanks very much.
- It is an honor to have both of you here.
This is a wonderful documentary, I really appreciate you taking some time to talk to us about it.
And it is a complex documentary.
It is a very complicated story in a way, and there are very specific issues that intersect here.
But I'd like to start with more of a broad view and I hope you don't mind, but I wanna talk about surveillance, to start.
And at one point in this film, one of the subjects, Silkie Carlo from the organization Big Brother Watch in the UK pulls out a copy of 1984, which of course, is George Orwell's dystopian novel about a place where surveillance is all encompassing.
And she reads a passage and I wanted to read it here as kind of a setup for our conversation.
"You had to live, did live in habit that became instinct in the assumption that every sound you made was overheard and except in darkness, every movement scrutinized."
Then she notes that when she read that book in school, the idea of widespread surveillance was indeed fiction, but that now, the possibility at least is a reality.
So I wanted to start off by asking, how far are we on the path to George Orwell's vision?
And how far are we from the point of no return?
- Oh, that is such a fantastic question, and you've just made my night by starting the night with Orwell, the book I read when I was 16 that essentially changed my life.
It blew my world open.
And I think Silkie Carlo says it right when she said, when we read this as children, we thought this is a world that could never be, and now we're increasingly living in that 1984 reality, and I think...
I didn't really feel connected when I started making this film, because I think these issues are often talked about with the word "privacy", and it's a word that I sort of don't relate to. It feels like it has to do with privacy rights, and if you don't have anything to hide, you shouldn't worry about your privacy.
And I think the word that I more identify with that I think is more accurate for the age that we're living in is an age of invasive surveillance.
And I think that we are living in an age where states, as (indistinct) says in the film, have wanted this kind of information about citizens for a long time, and we as a society have begun to offer it up. As democracies, we have essentially picked up the tools of authoritarian states without any democratic rules about how these technologies should be used, without any guardrails in place.
And I think we're much closer to 1984 than we would like to be.
I think the science fiction in the documentary is sort of a reality that's closer than is comfortable.
- Another thing that Silkie says is that surveillance is affecting how we develop as humans, which I think is interesting to just think about, the sort of warping, shaping effect that it may have.
Meredith, how is it changing our humanity?
- Well, I feel really fortunate to be one of the last generation to grow up before the internet, right?
So all the stupid things that I did when I was a kid are not preserved in social media, in amber, and I'm really grateful for that because I want to believe in the possibility of change.
I want people to be able to do stupid stuff when they're kids and realize that it's stupid and recover from a mistake and grow, and if we have this idea of invasive surveillance and we have this idea that everything has to be tracked all of the time, we're not leaving people room to grow and change.
And I think one of the really powerful things about this film and about Joy's work is that it gives you this really visceral understanding of what's going wrong with algorithmic decision-making, and also it gives you the hopeful idea that it's not too late.
So one of the things I always say is that we should be judicious about our use of technology.
We should think about using the right tool for the task, and sometimes, that tool is a computer and sometimes it's not, and it's okay to say no to technology.
- So Meredith, I wanted to ask you another question.
I mean, taking off from that, and really, the point of this documentary, I think, is that this technology not only impacts us, but it impacts us all differently.
And the person I would call the main character, who I think is the through line here and kind of the heart and soul of this work, is Joy Buolamwini, whose work at the MIT Media Lab has shown that biometric technologies driven by AI are discriminating against people who are not white, and especially people who are not white men.
And I was hoping that you could, for the viewers who maybe have not seen the documentary yet, explain to us why that's the case.
Why does AI see gender and race differently?
- Joy's work was really groundbreaking in that it was the first time that kind of the mass audience realized that there are big problems with these facial recognition systems, with the AI that powers facial recognition systems.
The systems are better at recognizing men than women, they're better at recognizing light skin than dark skin, and they do not include trans or nonbinary folks at all.
Trans and non-binary folks are invisible to facial recognition systems.
And so it brings up this question of who gets recognized as human by these systems.
Now, the problem is that these systems replicate the bias that exists in the world.
One of the things that we do as humans is when we make technological systems, we embed our unconscious biases in those systems, and we all have unconscious bias, and we're all trying to be better people every day.
We don't wake up in the morning and say, "I'm gonna go oppress people."
That's not what developers do, but you can't see your unconscious bias because it's unconscious, right?
So when we have homogeneous groups of people creating technology, the technology gets the collective blind spots of the group that is making the technology.
And we can look at the composition of Silicon Valley and we can see that it's largely white and male.
Silicon Valley has had a diversity problem since its inception.
They have never put in the work to change it, they're not now currently putting in the work to change it and it is in a certain sense unsurprising that they are making technology that is deeply flawed.
Joy's work kind of brings to the surface all of these flaws, all of these tensions, all of these longstanding failings of Silicon Valley, and the failings of the technology that is created by too narrow a group of people.
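To make the kind of audit described above concrete, here is a minimal sketch, in the spirit of that research, of a disaggregated accuracy check: rather than one overall accuracy number, it reports accuracy per demographic group, which is how the gaps between groups become visible. All of the predictions, labels, and group names below are hypothetical toy values, not results from any real system.

```python
# A minimal sketch of a disaggregated accuracy audit: report accuracy
# per demographic group instead of one overall number. All values here
# are hypothetical toy data, not results from any real system.
from collections import defaultdict

def audit_by_group(predictions, labels, groups):
    """Return accuracy computed separately for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    # A single aggregate accuracy can hide large per-group gaps,
    # so each group is reported on its own.
    return {group: correct[group] / total[group] for group in total}

# Toy example: a classifier that is far more accurate for the
# "lighter" group than for the "darker" group.
preds  = ["m", "m", "f", "m", "f", "f", "m", "f"]
truth  = ["m", "m", "f", "f", "m", "f", "f", "f"]
groups = ["lighter", "lighter", "lighter", "darker",
          "darker", "darker", "darker", "lighter"]
print(audit_by_group(preds, truth, groups))
# {'lighter': 1.0, 'darker': 0.25} -- 100% vs 25% accuracy,
# while overall accuracy alone would have reported a misleading 62.5%.
```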
- So there's something very interesting about this documentary.
Shalini, and I have a question about it, it's kind of a craft question, I guess, or about a making of a question.
So as Meredith pointed out, AI is engineered to favor those who created it, and that is white men as the documentary really explains very clearly, yet so many of the sources, almost all of the sources that you rely on in this documentary are women, or are not white men.
And I thought that that was interesting, and I guess my question as I was watching it is, was that an intentional choice or are the people who are doing this work that you are reporting on mostly not the people who are represented within the systems and organizations that really build and develop and distribute this technology?
- Well, I appreciate the question.
I will say when I started making this film, I didn't know what a revolutionary act it would be to have a film about future-leaning technologies that centered the voices of women.
We're so accustomed to seeing films about technology that are mostly men and mostly white that we often don't even question it when we see it.
And I think, for me, I am a filmmaker who is conscientious about whose voices I amplify.
I am someone who is very thoughtful about the decisions I make about who I label as experts.
That being said, I didn't set out to make a film that would be predominantly women and predominantly people of color. It was actually the research, and I do have a rigorous process of research when I make films, that kept leading me back to the brilliant and bad-ass voices in the film, present company included, Meredith Broussard. And what I realized is that the people who are leading the fight against bias and for more ethics in artificial intelligence are actually women, people of color, LGBTQ, religious minorities. And what I think is common to the cast is that not only are they likely the smartest group of human beings that I've ever been in the company of, I think there's seven or eight PhDs in the film, I think Cathy O'Neil once tried to explain to me how she knitted the equation for pi into a sweater.
I only really get half of what they say, and I think half is a lot, but they have advanced degrees in data science and mathematics. But I think also, they're outsiders, misfits, marginalized in some sense; they've had an experience where their experience was not centered, and that allowed them to shine a light on some of the unconscious bias of Silicon Valley.
And so it's been my experience in the making of the film that the group of people, the demographic of people represented in "Coded Bias" is actually quite representative of who's leading the fight for ethics in AI.
- And Meredith, I mean, you're a part of this community.
So can you describe the community for us?
I mean, is it close knit?
I mean, I know that Joy has created the Algorithmic Justice League, which is a part of this film, and are you a member of the AJL and- - I am.
I am a proud member of the Algorithmic Justice League.
I took their Safe Face Pledge when it came out a few years ago and they have a really exciting new initiative going on about proactively finding the problems in artificial intelligence systems.
The community, when I started working on AI ethics, the community was pretty small.
And I feel really, really grateful that the community has blossomed as much as it has.
I mean, Cathy and I were friends before the film; all of us who appeared in the film are friends and colleagues.
So Shalini did a really amazing job of finding the community and elevating voices from it.
And of course, we're all immensely, immensely proud of Joy and supportive of each other's work.
Places that you can go to find out more about the community, you should start with the Algorithmic Justice League, the PBS site for "Coded Bias" has a bunch of resources.
I helped to run something called the Center for Critical Race and Digital Studies at NYU, which is an academic kind of think and do tank where we work on digital studies as it intersects with race.
There's also a new field called Public Interest Technology, which very much intersects with AI ethics.
So if you're the kind of person who is interested in getting a job in civic technology or making technology that helps governments run better, or helps governments to build vaccine sites that don't crash, for example, Public Interest Technology is the place to go.
And, what was the other thing I wanted to mention?
There's a great reading list that you can compile based on who's in the film.
I would also recommend Ruha Benjamin's book, "Race After Technology", and of course, if you haven't already, you should absolutely read "Algorithms of Oppression", read "Automating Inequality", read "Weapons of Math Destruction", read "Twitter and Tear Gas", and maybe also pick up "Artificial Unintelligence".
- While you're at the bookstore.
(indistinct) - All of the books she just mentioned are on the "Coded Bias" Take Action site, and all of the organizations are there as well, if you wanna connect to the body of knowledge and to the community.
- All right, thanks for that.
Shalini, let's come back to sort of the topic itself.
I was curious, when you were doing your work on the film, when it comes to the harms associated with the inaccuracy of biometric technologies, was there an example of those harms that was surprising to you, striking to you, something that could maybe illustrate for the audience what exactly is at stake here?
- Yes, I think I have never recovered from seeing a 14-year-old child being stopped by five plainclothes police officers in the UK. A Black British child in school uniform is accosted by five plainclothes police officers, fingerprinted, held for 10 minutes, asked all kinds of questions, and he did not have any sense of why he had been stopped.
And it was only because I was there following Human Rights Observers, and only because I was in the UK, where there are laws that make this process transparent to a journalist or a documentarian, that I could actually capture a moment like that on camera.
And I think I maybe watched the scene 200 times in the editing room and every time I watched it, even talking about it, I get goosebumps because it's a moment where you see a child being traumatized by technology and I live in New York where we had stop-and-frisk and it was just incredibly emotional.
And I think there's also that moment in the film where you see a UK citizen just going about his day, trying to avoid the real-time facial recognition cameras of the UK police, who were trialing this technology at the time, and he pulls his scarf up over his face and is ticketed.
And I sort of use the parallel of apartheid South Africa to remind us all that in a democracy, police aren't supposed to be able to arbitrarily stop us on the street like that.
That's not supposed to happen in a free society.
And those moments are still incredibly chilling to me, even after watching them so many times, because I literally saw the moment where these technologies overstep into our civil rights and civil liberties; I saw it happen right in front of me.
And I think what is so terrifying to me about Joy's discovery of racial bias is that this was not a technology being beta tested on a shelf or in a lab somewhere; this is technology that was actively being sold to the FBI, actively being sold to ICE immigration officials, actively sold and deployed, most times in secret, by police departments all across the country, with no one that we elected, no one that essentially represents we, the people, providing any government oversight in how these technologies should be used.
And I think that was chilling, that we have Big Tech selling directly to law enforcement with no rules in place, and that's where my fear is: when democracies pick up the tools of authoritarian states without any kind of policy or democratic rules in place that would protect citizens' rights.
- Yeah.
I absolutely agree.
And something else I find horrifying about it is that we spend so much money on this technology and it doesn't work.
It has never worked.
And people keep saying, "Oh, well, it's a little better now.
We just made it a little bit better."
And it's a fool's errand that people have this fantasy that we're going to be able to put cameras everywhere, we're going to be able to connect computers to the cameras, and the computers are going to make objective, unbiased decisions that are gonna get us away from the essential problems of being human.
And it's an absurd notion and it's been with us for centuries, for generations, and it has never worked, it's never going to work, we're never going to be able to automate our way out of being human.
So it's just a waste.
- All right, and this is really the nitty-gritty.
I mean, this is where it gets really kinda complex to try to sort of parse and understand what the issues are and what the solutions are, I think.
So I'm gonna ask a few questions that sort of navigate this terrain, and the first one is based on something that Cathy O'Neil said, who you mentioned earlier, the author of "Weapons of Math Destruction", a really brilliant thinker, and she says the underlying mathematical structure isn't racist or sexist, but the data embeds the past, which is what you said earlier, Meredith.
And my question is, if algorithms rely on the past, which is a part of what they are, how can we build them to avoid the evils in that past?
How is it even possible?
- I mean, it's not.
It's not possible in some cases.
People imagined that it would be, but it's simply not.
One of the things that...
So you're right.
Cathy writes about this a lot, Joy says in the film that the past dwells in the data.
So the way that machine learning algorithms work is you take a whole bunch of data about the world as it is and you use this data in order to construct a model to make predictions about how things will be or should be in the future, but the problem is that when you're taking data about the past, that data reflects all of the sins of the past.
So if you're trying to use data to construct a model to decide who gets a mortgage, well, you are including all of the historical data about racist housing policy, about redlining, and you're saying, okay, only the rich people who have had mortgages in the past should be able to have mortgages in the future.
And that's not the world we want.
We want a world where there is economic mobility.
We want a world where you can change, where you can grow, where you can have a future.
That's the American dream, right?
And so imagining that computers, which are just machines for doing math, can do things that they can't is a recipe for disaster.
One of the things that I write about is an idea that I call technochauvinism, the idea that computers or computational solutions are superior to others.
And technochauvinists say things like, "Oh, computers are more unbiased or more objective than humans and therefore better."
And it's not a competition, it's about the right tool for the task.
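To illustrate the mechanism Meredith describes, the past dwelling in the data, here is a minimal sketch of a toy "model" that learns approval rates from historical lending decisions and then simply re-enacts them. The zip codes, incomes, and decisions are invented for illustration; no real lending model is this crude, but the dynamic is the same.

```python
# A toy illustration of how a model trained on past decisions re-enacts
# them. All records below are invented; "A" and "B" stand in for
# neighborhoods whose histories were shaped by redlining.

historical_loans = [
    # (zip_code, income, approved)
    ("A", 40, True), ("A", 55, True), ("A", 70, True),
    ("B", 40, False), ("B", 55, False), ("B", 70, False),
]

def train_approval_rates(records):
    """'Learn' the historical approval rate for each zip code."""
    rates = {}
    for zip_code in {record[0] for record in records}:
        decisions = [ok for z, _, ok in records if z == zip_code]
        rates[zip_code] = sum(decisions) / len(decisions)
    return rates

def predict(rates, zip_code):
    """Approve whenever the historical approval rate clears 50%."""
    return rates[zip_code] >= 0.5

rates = train_approval_rates(historical_loans)
# Same income, different zip code, opposite outcome: the model has
# faithfully encoded the sins of the past, not anyone's creditworthiness.
print(predict(rates, "A"))  # True
print(predict(rates, "B"))  # False
```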
- So should we just abandon algorithmic thinking?
I mean, or just put it in its place and just apply it to places where it feels like it will do no harm?
What's the solution?
- I mean, it's not reasonable to say, "Oh, we should not use computers."
That would be crazy.
Computers are great.
I write a lot of code, I build AI tools for investigative reporting, that's what I do in my academic research, and computers are great.
We should use computers for the things that computers are good for, and we should not use computers for the things that computers are bad at.
And when it is clear that computational decisions are racist or sexist or ableist or ageist, we especially should not use computers then.
- So this, I think, speaks to this other thing in the documentary, which is that there's the inaccuracy, but then also, there's the way that the technology is employed in what it's intended for.
There are examples of targeted advertising, credit scoring in China creating a kind of caste system there and biometric technologies being used to surveil people living in certain communities.
These are benefiting some people while trapping or even targeting other people.
And so, I came away from the documentary just wondering... There's a part of the documentary that's focused on improving the efficiency of these technologies, so that there is sort of equal recognition of all people, but the question that's hanging out there for me is: if we are perfecting that technology, aren't we just creating a more efficient tool of oppression?
- I think you're right about that.
There's a few things, I think Silicon Valley has a radical exclusion problem and there's the gaze of technology.
I'm a filmmaker so we're sort of used to talking about the white male gaze in cinema or in filmmaking or you had campaigns around the Oscars that sort of highlighted that, but I do think that's only one issue that we're dealing with with these technologies, and I think that often when I speak to technologists, they just say, "Oh, you just had bad data.
We can just fix the data and everything will be perfect."
And I think where I differ, and why I'm so grateful to people like Meredith Broussard for giving me an education, is that some of this has to do with our own literacy as a society, and the fact that your ten-year-old is going to start using these technologies and we don't actually understand how they work, what they can do and what they can't do. But I think for me as a filmmaker, one of the things I tried to highlight in the film is that it's really not about building the perfect algorithm, it's about building a more humane society.
And I think what Meredith Broussard's book "Artificial Unintelligence" points out so well is that we've been sold a bag of tricks about this idea of the solution is in this great white knight of technology that's gonna perpetually save us from every problem that we have.
And sometimes the best solution is actually a more human one.
And so what I hope "Coded Bias" does is help reframe the conversation around, is our goal to be as efficient as possible, because we all have that moment in like China, where we're like, "Wow, I could buy a candy bar with my face.
That's really cool, how efficient.
I could pay for dinner with my face.
I don't have to think about who I can make friends with, I can just look up how many Instagram followers they have."
That sort of algorithmic obedience training that Cathy O'Neil talks about, but I think to me, part of it is about the way that we're using technology and how we build a more human-centered society where the tech is in service of human beings and not the other way around.
- I'm glad you brought up China because this is something that I also wanted to ask you about.
So the documentary takes us to China. We follow a young woman as she goes about her day, as you've said, using her face to purchase things, to get on the subway, talking about credit scoring and the way that it helps her decide who's worth being friends with. And at the end of it, it is such a surprising part of the film, because I think the expectation is that you're going to tell a story like that and then there's gonna be some big payoff that shows how horrible it is, but really, the subject that you're following really likes it.
It's very convenient, it is a world within which she moves smoothly and finds to be very satisfying.
And really, I mean, it's brave filmmaking to have that in there and let it sit with the viewer and not be heavy-handed about telling the viewer what to think about it.
And so I have to ask the question that that example sets up, which is: is it really so bad?
I mean, is it possible that there could be a future where we are as happy as the woman you followed in China is?
- Could we all love Big Brother?
- Yeah, could we all love Big Brother?
- Is that the question?
Could we all love Big Brother?
Well, I think that vignette is sort of a "Black Mirror" episode inside of the documentary.
I think it's important to point out that China doesn't have freedom of the press, and I would have endangered someone; it would have been dangerous for someone to speak up against the social credit score.
What she said was exactly what she said.
I was as surprised as anyone. We were in the edit room; I didn't get real-time translation, so it wasn't until editing, months later, that I actually got the translation, and my editor and I were wide-eyed, we couldn't believe it, especially 'cause she looks sort of counterculture, she's a skateboarder. And at the same time, I thought it was so important to represent, because we wanna think that that's a galaxy far, far away, but I think that young woman is actually a reflection of ourselves.
We all have that moment where we're like, "I'm gonna press a button and a car is gonna come," and we're not gonna think about the labor rights of that driver, what happens to that driver if his or her score falls below a four-point-something, which is when they throw you off the platform, this black box algorithm throws you off the platform. And I think about when she says, "I don't need to judge anyone, I can save so much time deciding who my friends are based on the social credit score."
Well, how often have we judged someone based on how many Instagram followers they have, or how many likes they got on something?
And the way in which Cathy O'Neil says it so poignantly in the film, she calls it "algorithmic obedience training", the way that we are being trained by these systems.
And I think if there's one thing on the cutting room floor that I agonized and agonized about, it's Meredith explaining that these algorithmic systems are based on popularity, and what is popular is not always good.
And so to me, that China episode, that sort of vignette within the documentary, is just such a reflection of where we are as democratic societies with this technology: we're on the precipice of that, and we're sliding into it without even knowing.
- So, I wanna remind the viewers that we have a question and answer session coming up in just a couple of minutes.
So if you have some questions, please get them in now and we can hopefully make them a part of the lineup when we ask those questions.
And I just wanted to ask one more question from myself, and it really has to do with...
There's a sense of inevitability to kind of where we're headed when it comes to algorithms, that this train is not stopping, that we are going to continue to become more and more immersed in this technology.
And yet this documentary shows people fighting and fighting in a lot of different ways.
I mean, I think that was actually one of the things that's really interesting here: you have Big Brother Watch, which is doing some real, like, almost direct-action kind of activism, and then you have Joy, who is really connecting a larger population to this issue by using creative means, I mean, her spoken word poetry, and then even the Algorithmic Justice League, I mean, these various sort of pop culture ways of connecting.
And I was just curious, what are you seeing that gives you hope that we will be able to reverse course here and not end up in a worse place than we are right now?
And I'd like to start with Meredith and Shalini, if you could close that out, that'd be great.
- Well, Mark, I'm gonna disagree with you a little bit.
I don't at all think that it's inevitable that algorithms are going to take over.
One of the things that I care a lot about is empowering people around technology, building computational literacy so that people feel empowered to push back, to fight back.
We do not have to close our eyes and think of England when it comes to technology, we can speak up in a democracy and say, "No, we do not want racist facial recognition algorithms used at our airports."
And in the wake of Joy's work, there have been several important steps.
Cities like, I have to read this, Berkeley and Oakland and Somerville and Brookline and Northampton and Cambridge have all banned the use of facial recognition technology by police in the wake of Joy's work.
The Big Tech companies announced that they were putting a pause or stopping entirely selling facial recognition technology to police or developing it for police.
We can stop the train and we have the power in our hands to course-correct.
We do not need to be prisoners of the past.
So I think that for many years we were sold a bill of goods around technology.
We were told, "Oh, it's too hard.
Technology is too hard for you to understand.
You should just buy this thing and it's gonna make your life easier and seamless and don't think too hard about it."
And that's just marketing and we don't have to believe it.
Technology is a little bit hard, it's true.
It's a little bit hard, what computers do is math, and I mean, it's really hard to explain in a soundbite, honestly.
I'm a professor, I teach people things for a living and I'm gonna be totally honest with you that no, it's not really easy to understand what's going on in artificial intelligence.
But it is not impossible, it just takes like a little bit of working at it and then you understand it and then you can see, "Oh wait, we don't have to be controlled by the decisions that were made in the past, we can forge a new future."
But in order to do that, we do need to push back against technochauvinism.
People do need to educate themselves, empower themselves and we also need policy changes and we need regulation.
So it's not enough for individuals to wear makeup, for example, that defeats facial recognition technologies.
We need changes at the policy level, we need structural changes, so that the situation is fixed for everybody, as opposed to just relying on individual efforts.
- Shalini.
- Well, I'm gonna take it full circle with your 1984 reference.
Aldous Huxley, author of "Brave New World", another sci-fi writer, wrote a letter to George Orwell in 1949 about "1984" and said, "I really enjoyed your book, but in the future, I don't think people are gonna be oppressed by stick and by stone; I think people will learn to love their own oppressor."
And so I think that is the tricky world that we're in where we have learned to love Big Brother in that particular way and that we are freely offering our identities up to corporations and our data up to these corporations.
And you're right, in this pandemic, we've become even more hyper reliant on these technologies and we've seen Big Tech make even more money, with Jeff Bezos on track to be the world's first trillionaire, which I think should be illegal, but that is on track to happen.
So you are right.
There are some sobering facts about this, and we are right now living in an age that's like the automobile before you had seatbelt laws and a car seat for your baby; it's like cigarettes before we had labels on them, and pharmaceuticals with no ingredients listed and no usage indications, and I think that's where we are with AI.
That being said, the making of this film reminds me that everyday people change the world, and I'm saying that in a very pragmatic, unromantic way.
Documentaries remind me that superheroes don't always wear capes.
And I've seen that in the making of this film, and I just wanna say that I've seen a recipe for social change, where I really believe that we have a moonshot moment, that the cement is not dry on these technologies, to call for ethics in the AI technologies that will define our future.
And in the making of this film, I have seen a sea change that I never thought possible, with what Meredith was referring to, where you have IBM essentially disrupting their whole business model, saying, "We're stopping all research on facial recognition."
You have Microsoft saying, "We'll stop selling it to law enforcement."
and Amazon putting a one-year pause, which is up in a couple of months.
All of that happened in June 2020, and I think that timeline is particularly important, because I think the recipe is that we need brave scientists like Joy Buolamwini and Cathy O'Neil and Timnit Gebru and Deborah Raji, who put their academic reputations on the line and are often dismissed or attacked by Big Tech before they're believed. We need those brave scientists, who are unencumbered by corporate interest, to tell the truth, and then we need, hopefully, science communicators like Meredith Broussard, and I hope the film played a small role.
We've seen with COVID and with climate change that we've got big problems to solve, and we need literacy.
From the age children start using these technologies, we should have some basic literacy around how people can question them.
But I think June 2020 was important because the research had been out for almost two years when the Big Tech companies made this decision, and what was significant about June 2020 is we had the largest movement for civil rights and equality that we've seen in 50 years, around the unjust murder of George Floyd by law enforcement.
And what happened is people were making the connections between racially biased surveillance, invasive surveillance technology in the hands of law enforcement and the inherent value of black life and the communities that are most vulnerable to its impacts.
And to me, that is the recipe of how we make change.
We need brave scientists, we need science communicators and we've seen for the last year is that, we, the people of a democracy, this is not the time to be asleep at the wheel, that we need to participate in public policy and in governance, and that means local change like Meredith was saying.
When we make local change, when we support some of the people that are on the "Coded Bias" website to make change, we disrupt the ubiquitous stronghold that Big Tech has on our society, and we make it more inconvenient for them to have ubiquitous power.
And you saw it with Europe passing legislation; some of us as Americans got rights around Cambridge Analytica because our data transited Europe, and Big Tech doesn't wanna do one policy for California and one for the other 49 states.
And so the more that we can act and push for local policy, the more we can disrupt this ubiquitous tech.
So I really believe we have a moonshot moment and that the future is really a script that we're all writing together.
- Well, let's actually first read a question.
I think we'll follow on that really well.
It's from Matt Mano Bianco.
Matt says, "Toward the end of the movie, Joy and several others testified before Congress.
This seems to have created a heightened awareness of the issues and dilemmas.
Afterwards, there is a comment that there are still no US federal legislation or guidelines establishing guardrails.
Can you speak to what has happened, is happening since the film was finished in the United States?"
- Yes.
Well, that was an incredible moment, 'cause we saw Jim Jordan, a right-wing, Trump-supporting Republican from Ohio, agree with AOC, a left-wing Democrat from Queens, and they both agreed, and I saw something we don't see on television, which is bipartisan support for something.
And I think we lost a lot when we lost Elijah Cummings.
There was supposed to be a bill on the table out of that committee that never happened because we lost Elijah Cummings.
And so, there is some federal legislation, but none of the advocates I've talked to have very much hope that federal policy will pass, so I think right now our biggest hope is at the state and local level.
- So, another question about what's being done, what can be done.
Mary McLaughlin from Seattle asks, "I use the dropout rebellious 'do not want to participate' strategy.
It is not that successful.
What should I do?
How do we empower ourselves democratically?"
So what can people do on an individual basis?
Meredith?
- Well, I think it's about individual action and action at the policy level.
There is the Facial Recognition and Biometric Technology Moratorium Act.
There is the Algorithmic Accountability Act.
There's legislation being proposed.
It is not necessarily passing, but it is being proposed.
So we can work to get these kinds of laws and policies passed so that, as Shalini said, the changes percolate through the system.
California has a Data Privacy Act now and all of us benefit from that because the developers are...
It's just too complicated to make something work for individual states.
So we're all benefiting from GDPR being passed in Europe, from the California Data Privacy Act being passed.
So let's pass all of these things, find the place where it feels right for you to be an activist around technology issues and just push really hard on that.
The ACLU of Massachusetts was very much responsible for getting the facial recognition ban passed in Cambridge and Somerville.
So check out the ACLU, check out all of the resources on the "Coded Bias" website and find the place that resonates for you and start speaking up.
- All right, I've got a question here from Stanley Jacuma and I wanna kind of add on to this, but Stanley says, "I've heard that companies are trying to improve facial recognition so it can recognize people even with a mask on and to also assess people's emotions.
Is this true and what are the implications?"
And what I wanna add to this, just at the risk of ending this conversation on a dark note: what are the things, what are the developments... You talked about the things that are making you hopeful; what are the things that are deepening your concern?
Are there new technologies?
Are there new developments and how it's being used that you think that people should know about?
Shalini?
- Well, I'll defer to Meredith, 'cause Meredith is the one who schooled me about so much of this technology, but I will say that, yes, these cameras are learning to identify us with our masks on, and I am quite concerned. And I've learned through amazing researchers like Meredith Broussard that any technology that says it can sort of read our emotions, and there's a company called HireVue that says it can judge if you're gonna be a good candidate by your facial expressions, that that's actually pseudoscience.
And so I think one of the things that I feel so grateful for to the cast of the film is giving me an education and I don't want to underestimate the power of literacy in terms of understanding.
It's only because I sat with such brilliant human beings that I am now able to say, oh, that's pseudoscience, that's not really science.
They can't really do that stuff.
That's technochauvinism, they're just making this stuff up.
And it's only because I educated myself that I could start to have those moments. And I just wanna say, and I'll let Meredith talk about what she's disheartened about, but you had people like (indistinct), and I see (indistinct) of Brooklyn, who didn't even know what biometric data was, and I think a lot of us feel like imposters, like we don't have a place in questioning these technologies because we don't have advanced degrees, and it's really my hope that "Coded Bias" pulls out a chair for us and lets us know we all have a place at the table, because all of these technologies are deployed on all of us.
And so, it's just my hope that we'll all get literate and so we can discern for ourselves what is science and what is pseudoscience.
- I love that vision Shalini.
I would love for everybody to be so much more computationally literate and to just say no to garbage science, to junk science when it's embedded in computation.
So yes, computers are getting better at recognizing us with our masks on.
That is different than emotion detection and emotion detection is just garbage.
There is a very old pseudoscience, also garbage, called phrenology, and the Nazis were very into phrenology. What phrenology had to do with was measuring people's skulls and making conjectures about their, like, fitness or validity or intelligence based on skull measurements.
And it's absolute garbage, it's like worse than patent medicine, it's pseudoscience.
It is not worth any kind of intellectual attention.
It is the most anti-intellectual thing you could possibly imagine, and emotion detection is computational phrenology.
Emotion detection by computers is not a real thing.
Also, the computers that claim to be able to tell if somebody is gay or straight from looking at a picture, that's also not a real thing.
There is a lot of junk science out there.
There are a lot of claims about what computers can do that are just entirely false.
So be really critical when you hear these claims in the future, and know that there are always limits to what computers can do, the same way that there are limits to what we can do with math, and feel free to be critical, feel empowered to be critical about these things.
- Well, that seems like a good note to end on.
Shalini and Meredith, thank you so much for talking with us.
- Thank you.
- Thank you.
- Before we go, a couple of important notes.
I want to remind you that Crosscut is a nonprofit reader-supported news site that relies on the support of our community to ensure that our events and journalism remain free for everyone.
Thank you so much to everyone who donated to this event today.
If you would like to make a donation or become a member, visit us at crosscut.com/support.
Also, we hope you will join us at the next Crosscut event, the latest edition of our "Northwest Newsmakers" series, which takes place on April 7th.
We'll be discussing what unites us with Eric Liu of Citizen University.
And last, you should also check out the Crosscut Festival coming in May which features a ton of incredible speakers like Jane Goodall, Ibram Kendi and actually, a little bird just told me that this Saturday we're announcing some pretty exciting new guests.
You can learn more at Crosscut.com/Events.
Okay, thanks again to our guests tonight and to all of you for joining us.
Have a good night.