Mutually Inclusive
Techno Racism: The Bias Built In
Season 5 Episode 4 | 26m 46s | Video has Closed Captions
We explore the intersections of race and technology in today’s digital age.
Discover the hidden side of tech this week on Mutually Inclusive. Technology is changing rapidly, allowing more advancement than ever, but not all people are impacted equally. From biased algorithms to systemic inequities in program innovation, we talk with experts in the field about how technology can perpetuate racism and what moves are being made to create a more equitable future.
Mutually Inclusive is a local public television program presented by WGVU
(gentle music) - The thing about AI, it has the extraordinary potential to just reimagine every system in ways that are very, very positive and in ways that can be very, very traumatic for certain groups.
Techno-racism is definitely a term, and defined simply, it means that we are encoding systemic racism into our technological systems because of those historical data sets that we use.
- So, we have developers that are artists and they want to build something, right, and it's beautiful for them.
But my team is like, "Okay, how can this be used negatively?
How would others use it maliciously?"
And nobody wants to really think that way, but you have to.
- Our data sets do not have the luxury of historical amnesia.
They don't forget the bad things that they were used for, and they often carry with them an extraordinary amount of trauma for Black and Brown communities, for women, for persons with disabilities.
The challenge with this technology, particularly AI and gen AI, is that an algorithm has the ability to make a decision about your life, your livelihood, and your legacy without you participating in that decision.
- On the surface, it seems like it might be moving forward, but the reality is that it's still like leaving a lot of people behind.
Certainly people in the African American community always knew that these problems existed.
It wasn't really until they exploded and became headline news that people in other communities really saw that there was a problem, not only here in Grand Rapids, but other places too.
- I think that's known now.
It's evident.
I think that it's complicated and we're trying to figure out how exactly we address that.
I feel like you can't change something unless you bring visibility to it.
If others can't see it, you can't change it.
(gentle music) - Technology, can it be racist?
- That's the question many are asking as developments in things like AI and facial recognition continue to make their way onto the scene and onto news headlines.
- That's right, and just as technology has developed over time, so has our relationship with it, which is why today on "Mutually Inclusive," we're looking at the intersection of race and tech, where it's been and where industry leaders say it's going.
(bright music) (upbeat music) The technological timeline has brought significant advancements, from the Stone Age to the Industrial Era, all the way to now when things like ChatGPT and facial recognition software are finding a home in our lives.
But many advancements haven't been created equally for all, and that's where tech ethicists say we find a domino effect.
Take the camera, for example.
At its onset in the 1800s, photography was primarily created with a white complexion in mind.
In the 1950s, Shirley Cards featuring white women were known as the gold standard for capturing light and color, leaving people with darker skin to fade into the background.
Kodak wouldn't debut the first multiracial film card until the 1990s.
And while film has improved, there continues to be a pattern of concern over how cameras and subsequent technology recognize people of color.
- I think my blackness is interfering with the computer's ability to follow me.
As you can see, I do this, no following.
Not really, not really following me.
- [Kylie] In 2009, tech giant HP investigated claims that its computers showed racial bias after a video of two coworkers went viral showing its webcam unable to recognize the face of a Black employee.
- I get really, really close to try to let the camera recognize me.
Not happening.
Now, my white coworker, Wanda, is about to slide in the frame.
You'll immediately see what I'm talking about.
Wanda, if you would please.
- [Wanda] Sure.
- Now, as you can see, the camera is panning to show Wanda's face.
It's following her around, but as soon as my blackness enters the frame, which I will sneak into the frame.
I'm sneaking in, I'm sneaking in.
I'm in there.
- [Wanda] That's it, it's over.
- There we go, it stopped.
- [Kylie] In 2015, Google Photos did pick up African American facial features, but not in the way it had hoped.
The company issued an apology after its software incorrectly identified two African Americans as gorillas.
The issue of facial recognition bias gained more notoriety in 2018, as this project by MIT researcher, Joy Buolamwini, raised questions over the accuracy of facial tracking and the way it's being used in society.
- The system I was using worked well on my lighter skinned friend's face, but when it came to detecting my face, it didn't do so well until I put on a white mask.
- [Kylie] A 2019 follow-up report by the National Institute of Standards and Technology found facial analysis software had higher rates of error and false positives when identifying Asian and African American faces.
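To make that finding concrete: a disparity like this is typically measured as a per-demographic false positive rate on non-mated image pairs. Below is a minimal Python sketch of such an audit; the data layout and the `matcher.verify` scoring call are hypothetical stand-ins, not any vendor's actual API.

```python
# Minimal sketch of a demographic error-rate audit for a face matcher,
# in the spirit of the NIST findings described above. The `matcher`
# object and its `verify` call are hypothetical placeholders.

from collections import defaultdict

def audit_false_positive_rates(pairs, matcher, threshold=0.8):
    """pairs: iterable of (img_a, img_b, same_person, group) tuples."""
    trials = defaultdict(int)   # non-mated comparisons seen, per group
    errors = defaultdict(int)   # non-mated pairs wrongly accepted, per group
    for img_a, img_b, same_person, group in pairs:
        if same_person:
            continue  # false positives are measured on non-mated pairs only
        trials[group] += 1
        if matcher.verify(img_a, img_b) >= threshold:
            errors[group] += 1
    return {g: errors[g] / trials[g] for g in trials if trials[g]}

# A large spread between groups in the returned rates is the kind of
# disparity the 2019 NIST report documented.
```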
Taking the past into account and fast forwarding to today's growing popularity of facial recognition and other AI technologies like data processing and analytics, we sought out the experts to break down the good, the bad, and the ugly regarding new tech and the possibilities of it holding racial bias.
- Thank you so much, Renee, for joining us today.
We so appreciate you.
- Well, thank you so very much for having me.
- [Kylie] Renee Cummings is an AI data and tech ethicist currently teaching at the University of Virginia's School of Data Science.
In 2023, she won the VentureBeat AI Innovator Award and holds a wide-ranging resume when it comes to the intersection of race and technology.
- Much of my work is situated within the space of justice and equity, and I spend a lot of time thinking about and researching fairness.
Fairness, particularly in data, our data sets and fairness in AI.
I look at questions around algorithmic fairness and algorithmic justice, and the ways in which we need to include diverse, equitable, and more just practices in how we do technology.
It's all about inclusion.
And for us to get the best results, the most accurate results when we are using data to make decisions, we've got to ensure that that data is accurate, we've got to ensure that the processes used to handle that data are accountable, are transparent, and, of course, always explainable.
- [Kylie] But how can data, the numbers, be fair?
Or for that matter, prejudiced?
Cummings says the bias is built into our framework.
- Our data sets do not have the luxury of historical amnesia.
They don't forget the bad things that they were used for, and they often carry with them an extraordinary amount of trauma for Black and Brown communities, for women, for persons with disabilities.
- People are saying that people of color have a new enemy, which is called techno-racism.
Is that a new term?
Is the new enemy called techno-racism?
- Well, techno-racism is definitely a term, and defined simply, it means that we are encoding systemic racism into our technological systems because of those historical data sets that we use.
Historical data sets carry a memory, and they often carry a memory of every core decision that was made based on bias, based on stereotypes, just discriminatory practices.
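Cummings's point about data sets carrying a memory can be shown with a toy model: if the historical labels a system learns from were produced by a biased process, the trained model reproduces that bias even on identical inputs. The sketch below is illustrative only, with synthetic data and made-up variable names.

```python
# Toy illustration (not from the program) of how a historical bias gets
# "remembered": the labels below come from a biased past process, and a
# model trained on them reproduces the disparity for identical inputs.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
risk = rng.normal(size=n)                  # the legitimate signal
group = rng.integers(0, 2, size=n)         # 0 or 1, two demographic groups
# Biased historical decisions: group 1 was flagged more often at equal risk.
flagged = (risk + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([risk, group]), flagged)

# Same underlying risk, different group -> different predicted "risk".
same_risk = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_risk)[:, 1])  # group 1 scores higher
```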
- [Kylie] George Bayard is the founder of the African American Museum and Archives in Grand Rapids, Michigan.
He says, much of today's issues on AI profiling can be traced back through America's history.
- I think as we look at the way Reconstruction and Jim Crow, you could only arrest African Americans if they had a criminal record.
Slavery wasn't there, so the prisons became the new slave units.
As people were arrested, they were thrown in jail for petty crimes, jaywalking, different things like that.
And over a period of time, that became the data that law enforcement used to arrest people.
And so whether the charges were trumped up or whether they were real, you had a group of people lumped into one big group of criminal backgrounds.
- I'm a criminologist, and as a criminologist I was working, of course, in the criminal justice system, working with risk assessment tools, which were automated decision-making systems and realizing that many of these tools were creating these zombie predictions.
They were overestimating the risks of Black and Brown defendants in the criminal justice system to make decisions on recidivism and public safety, on sentencing and bail.
I realized that I had to step into the space of algorithmic development and deployment and really force us to ask ourselves some critical questions about the data that we're using often so steeped in systemic racism to make future decisions on society, people's lives, and of course their legacies.
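The kind of overestimation Cummings describes is usually audited by comparing error rates across groups: among defendants who did not reoffend, how often did the tool still label them high risk? A hedged sketch, assuming a hypothetical data layout with risk_score, reoffended, and race columns:

```python
# Sketch of a disparity check for a pretrial risk tool (the column names
# are hypothetical): among people who did NOT reoffend, how often did the
# tool call them high risk, broken out by race?

import pandas as pd

def high_risk_error_rate(df, score_col="risk_score", cutoff=7):
    """df needs columns: risk_score (1-10), reoffended (bool), race."""
    no_reoffense = df[~df["reoffended"]]
    labeled_high = no_reoffense[score_col] >= cutoff
    return labeled_high.groupby(no_reoffense["race"]).mean()

# If this table shows, say, 0.45 for Black defendants and 0.23 for white
# defendants, the tool is overestimating risk exactly as described above.
```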
- [Kylie] Pretrial risk assessment tools have often been suggested as a more equitable alternative to cash bail.
But many states are backtracking after reports like this 2023 study by the University of Michigan show these tools, quote, "Replicate racial and socioeconomic disparities that bail reform seeks to address."
- The criminal justice system sees them because they fit a certain profile.
I mean, it's profiling basically.
I think it is based in racism in a lot of ways because they're looking at a group of people.
And I think that if you take those terms and apply them to a group of people, then that's racist, I think, or at least discriminatory, because the actual person that you're talking about might not fit into that category at all.
(gentle music) - We've also seen challenges in healthcare.
We have seen challenges in housing.
We have seen the advent of what is considered digital redlining, where algorithms are really replicating many of the biases that were deployed against communities of color from the '40s, the '50s, and the '60s when it came to getting a mortgage.
- [Kylie] In 2018, a study by UC Berkeley researchers found that mortgage algorithms show the same bias as human loan officers, both charging higher interest rates to Black and Latino borrowers.
It found that bias cost people of color up to half a billion dollars more in interest every year compared to their white counterparts.
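A disparity like the one the Berkeley researchers reported is commonly tested by regressing the offered interest rate on creditworthiness controls plus a group indicator; a positive, significant group coefficient means higher rates at equal credit. A sketch under assumed, illustrative column names, not the study's actual data schema:

```python
# Sketch of a mortgage pricing-disparity test: control for legitimate
# credit variables, then check whether group membership still predicts
# a higher rate. All column names here are illustrative assumptions.

import statsmodels.formula.api as smf

def rate_disparity(loans):
    """loans: DataFrame with rate, credit_score, ltv, income, minority (0/1)."""
    model = smf.ols("rate ~ credit_score + ltv + income + minority",
                    data=loans).fit()
    # Coefficient on `minority`: extra interest charged at equal credit.
    return model.params["minority"], model.pvalues["minority"]
```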
- The challenge with this technology, particularly AI and gen AI, is that an algorithm has the ability to make a decision about your life, your livelihood, and your legacy without you participating in that decision.
An algorithm has the ability to take away agency and autonomy and self-determination and really undermine things like self-expression, freedom of movement, freedom to gather, real constitutional and civil liberties, and civil rights are at risk because of the power of an algorithm.
- [Kylie] In 2020, Michigan would be front and center in this conversation.
- [Officer] Not make a show in front of your wife and kids.
- Why you grabbing me, bro?
- [Officer] Because you are under arrest for your warrant, for your outstanding warrant.
- Can I see the warrant?
- Absolutely.
- We will show you the warrant in the car.
- Hold on.
Hold on a second.
- I'll give you a good case.
We had that case of Robert Williams of Detroit, who was misidentified and wrongly arrested because of facial recognition.
It created an extreme amount of trauma in his life and his family's life.
- Settle down.
- No, no, no, no, no, no.
We're the police.
Don't tell us settle down.
- We're not doing that.
This isn't the game.
- They made a mistake, baby, it's okay.
- [Kylie] The Detroit Police Department made headlines in 2020 when a Black resident, Robert Williams, was arrested outside his Farmington Hills home after facial recognition software falsely identified him as the culprit in a shoplifting case.
Williams spent the next 30 hours in a jail cell.
Backed by the American Civil Liberties Union, he would ultimately receive $300,000 after suing the City of Detroit.
- Robert Williams's case became almost the test/poster case for the American Civil Liberties Union.
It is the first time there is an award of this kind or of any kind for anyone arrested by facial recognition technology.
So now the Detroit Police is using that to reimagine how they do policing.
- [Kylie] While Detroit authorities maintain the problem is with the police work rather than the software, these issues didn't stop with Williams.
- [Reporter] Detroit's police chief addressing the wrongful arrest of a woman who was eight months pregnant.
- [Kylie] By early 2024, there had been seven false arrests recorded across the US due to facial recognition errors.
All subjects were Black, and three of the seven happened in Detroit.
- Certainly people in the African American community always knew that these problems existed.
It wasn't really until they exploded and became headline news that people in other communities really saw that there was a problem.
That if it fits all of those scenarios, you might just get pulled over just because a computer is telling you this is a quote, likely suspect.
I think that that's probably one of the most dangerous things because the algorithms are biased.
You know, the people, it has to be somebody who puts the algorithm together, they put their own biases into the algorithm.
And so what you're gonna come up with more likely than not, is a false look at a community or of a person who wasn't doing anything but driving home.
And I think you see that a lot.
- Facial recognition or technologies that are kind of like used to grab, these algorithms, they just grab things.
- [Kylie] Abraham Jones is a cybersecurity expert in West Michigan.
In addition to his professional career, he also heads up the West Michigan Cybersecurity Consortium, which aims to enhance prevention, protection, response, and recovery regarding cyber threats.
- You know, my degree is in criminal justice with an emphasis in digital forensics.
We're building algorithms, we're building systems, we're building all of this great technology that we all want to utilize, and my job is securing most of it.
So, oftentimes things are built and they just don't work as advertised.
Like if you think about it, Amber Alerts are a big thing.
Now, you can have cars that are just going down the highway and all of their license plates are scanned.
So it's great in that, okay, an Amber Alert goes off, now you're seeing all these license plates, you can just let the state troopers know where they're at and they go get the vehicle, right?
But at the same time, that same technology's kind of doing faces and other technology to identify, all right, this is a scenario that you guys should investigate.
And then why is that a scenario that you should investigate?
You know, like, okay, there's an African American male under a bridge.
We saw a backpack over here in this area.
There's things that it's gonna just kind of correlate and put together that, unfortunately, because of the way we trained these algorithms and the data we used, it can have biases and it can have impacts that we're not really thinking about.
And it can actually cause us a little bit of harm, right?
Like now you're going and you're questioning somebody that has no real reason, they're just there enjoying their time or whatever they're doing.
They're outside of a 7-Eleven having a Slurpee, but now you're harassing them.
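The license plate flow Jones describes can be pictured as a simple hotlist match: every scanned plate is checked against plates from active alerts, and only hits are forwarded to troopers. The sketch below uses entirely hypothetical names; the policy question he raises is what happens to all the non-hits.

```python
# Toy sketch of the Amber Alert / license plate flow described above.
# Every name here is a hypothetical placeholder, not a real system.

AMBER_HOTLIST = {"ABC1234", "XYZ9876"}  # plates from active alerts

def check_scans(scans):
    """scans: iterable of (plate, camera_id, timestamp) from roadway cameras."""
    for plate, camera_id, timestamp in scans:
        if plate in AMBER_HOTLIST:
            # Only hotlist hits are forwarded to troopers.
            yield {"plate": plate, "camera": camera_id, "seen_at": timestamp}
        # The non-hits are the rub: retaining and correlating them is what
        # enables the broader profiling Jones warns about.
```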
- Jones is one of the few people of color working in West Michigan's tech space.
He says these types of technological developments can serve the public in a meaningful way, but adds the industry must diversify the talent pool first.
When you look at how the industry populations correlate with kind of who is more at risk, do you feel like there's a connection?
- Absolutely.
I think that's known now and it's evident.
I think that it's complicated and we're trying to figure out how exactly we address that, but it is, at the end of the day, representation.
If you don't have that in your training algorithm or you don't have that as a person that is going to be able to give a voice to the development of your system or your algorithm, it's gonna be a gap, right?
So, we have developers that are artists and they wanna build something and it's beautiful for them, but my team is like, okay, how can this be used negatively, right, really in a different way?
And nobody wants to really think that way, but you have to.
And I think it's just that we don't have the representation when we're building these systems and applications to reflect everybody, right?
Like not every voice is at the table to kind of give their input.
- When it comes to the world of tech, do you feel like you are having those conversations with people in your inner circle day to day?
- You know, I will say that I think of late in the last few years, and it probably correlates with the George Floyd event, but now that people are really kind of considering racism, as you would call it, in tech and just in society and in general, I think it is, it's something that now kind of comes up.
So, yes, those discussions are being had, it just, I think, comes down to, this is my opinion, but like resourcing and a willingness to bring about that change, right?
Like, everybody knows it needs to, but people are still gonna buy these systems and they're gonna be like, okay, we're doing this.
And, in my opinion again, like I think that it's key that we don't just say, "Well, it's good enough."
And, you know, "Okay, so we're gonna harass a few African Americans."
Like, that's not an appropriate answer.
That's not something we should be okay with.
One of the areas that we have to be careful of is that you can't have the minorities, like you just be like, "Here, can you answer these questions for me?
Teach me about this."
Like, you've gotta do the work as well.
And it really is about doing the work.
Like a lot of people aren't okay talking about it.
And I feel like you can't change something unless you bring visibility to it.
If others can't see it, you can't change it.
- [Kylie] Jones says it's not just about bringing minority groups into tech spaces, but creating ways for them to flourish.
A 2023 study by the McKinsey Institute for Black Economic Mobility found Black employees only make up 8% of US tech jobs.
And the further you go up the corporate ladder, those numbers dwindle.
- You have to put a community together for people to feel welcomed or else why would I wanna go feel uncomfortable all day?
I mean, it's probably illegal to just be like, "We're gonna hire a hundred Black folks today."
But, you know, just saying, "Okay, we are doing diversification in our processes."
It's about having larger pools of people.
So a white male wins the job, that's fine.
But if the pool was three and they were the same person, that's not the same as a pool of 15 and it being very diverse, being like, okay, now you're hearing different voices.
Even as a hiring manager, you're hearing different voices, you're hearing different perspectives.
There's a whole bunch here in just West Michigan, the ISSA and, you know, the Cloud Security Alliance, a lot of groups that are working together.
And it is, it is a focus to say we need to diversify the talent pipeline.
How do we work with all the different universities?
Most every university has a cybersecurity program now.
GVSU's is one of the newer ones, but Davenport, Ferris State, GRCC.
And so we're just, we're just doing our part, I guess, to really bring about change.
- We have seen our challenges with this technology when it comes to hiring practices, not only against Black and Brown people, but against women, against persons with disabilities.
So we have seen across the board where this technology can do some very troubling things, but we've also seen the advancement in healthcare.
We've seen the advancement in our communication technology.
We've seen advancement just in every industry, every discipline.
The thing about AI, it has the extraordinary potential to just reimagine every system in ways that are very, very positive and in ways that can be very, very traumatic for certain groups.
So if we wanna get this technology right and we wanna ensure that everyone is able to benefit from the beauty of this technology, then we've got to get the ethics right.
We've got to do it responsibly.
We've got to pay attention to safety and security concerns.
We've got to pay attention to questions around data privacy.
We've gotta look at governance, data governance.
- [Kylie] While both Cummings and Jones advocate for reform in the way technology is developed and introduced, they're quick to add their support to the concept of this new digital age, saying it's something that can be groundbreaking if everyone is along for the ride.
- Honestly, it really comes down to representation, inclusion, and the right voices at the table.
And then those that are decision makers being aware and actually doing the work to support teams and change.
- AI is a brand new world, an exciting new world, a great new world, and really I think at the moment it's AI's big moment.
And one of the things that I always say to everyone is, you know, AI is the language of the now.
It's the language that we are communicating in.
And for you to exist in the future, you've got to be conversant and literate in this technology called artificial intelligence.
(gentle music) - [Narrator] Thanks for watching.
You can find this episode and others online at wgvu.org/mutuallyinclusive or by visiting our YouTube page.
But don't forget to follow WGVU on Facebook.
Thank you for helping us be "Mutually Inclusive."
(gentle music) (gentle music continues)