At Howard
Dr. Safiya Noble – Diversity, Equity & Inclusion
Season 11 Episode 3 | 52m 48s | Video has Closed Captions
Dr. Safiya Umoja Noble, Professor of Gender Studies and African American Studies, Faculty Director of the Center on Race & Digital Justice, and Co-Director of the Minderoo Initiative on Tech & Power at the UCLA Center for Critical Internet Inquiry, talks with Dr. Anthony Wutoh, Provost and Chief Academic Officer of Howard University, about diversity and inclusion.
At Howard is a local public television program presented by WHUT
>> Hello.
I'm Dr. Ben Vinson III, the 18th president of Howard University.
And it is my pleasure to welcome you to this program, one of many we plan to bring to you as part of the "At Howard" series.
Howard University has the distinct pleasure of being the only HBCU in the country to hold the license of a public-television station.
This special relationship allows WHUT to have unique access to the breadth and depth of academic content that is being produced on our campus, from stimulating lecture series and panel discussions on a wide range of topics, to one-on-one conversations with captains of industry and international leaders of business, politics and the arts.
From time to time, WHUT will broadcast some of that content, from full programs to short excerpts, that we believe will surely stimulate and engage you.
So, sit back and enjoy.
We're proud to share with you some of what makes Howard University so special.
♪♪ >> Welcome to the Inclusive Growth and Racial Equity Thought Leadership Lecture series.
And also welcome to the midweek point of Data Science for Social Justice Week.
Raise your hand if you've heard about the data science program.
Raise your hand.
All right.
Okay.
All right.
So for undergrad students, we have a master's degree data-science program here at Howard University.
My name is Dr. Amy Yeboah Quarkume.
I'm the director of the data-science program.
And in the room we have some of our students who are virtually online because our program is a virtual program.
We have students joining from the continent of Africa.
We have students in the Caribbean joining us.
So, give them a wave.
Well, kind of behind you.
Give a wave to our virtual students in our program.
Our program has 40 wonderful students who are doing great work.
That number 40 represents a dynamic group, because our students are working on projects -- like Dorothy, who's working on a project on black women and breast cancer.
We have students like E.K., who's working on black wealth and reparations.
We have a student known as Laquita, who's working on a project on climate change.
Right?
So, we have great students who are doing great work in our program.
And in our first year of our program, we're excited to have our second inaugural Data Science for Social Justice Week.
And this week we're highlighting the work of Dr. Safiya Noble.
Can I have a round of applause?
Great place to clap.
Before I go into the introductions from Dr. Major -- this morning, everyone kind of saw the warning, and then they lifted the warning.
And for me, these warnings at Howard are a bit triggering because I remember November 15th, 2015, when we had a -- well, for me my first warning of an on-campus death threat.
I remember that morning vividly.
And that warning was triggered by an issue of toxic social media and Dylann Roof, right?
And that conversation around the impact of social media and algorithms, and the impact they have on our lives and even our deaths as Howard students and Howard faculty and staff, is something I'm going to talk about today -- how we should think about it even after 2015, because two or three years after that, Howard faced many other threats, right?
-- that put many students and ourselves at risk.
But from a mental standpoint, it was definitely very challenging to move forward from those points.
So today we're talking about the intersections of data science, social media, Internet search engines, and how we at Howard are creating a space where students from political science -- raise your hand, political science.
Sociology, raise your hand.
English, history, computer science are coming together to talk about data and the power data has, not just here at Howard, but across the globe.
But without further ado, I pass it to Dr. Major.
[ Applause ] >> Good morning.
My name is Dr. Monique Major and I'm from the Department of Psychology.
Hi, Psychology.
We're here in...which I love.
So I'm here to introduce Dr. Wutoh and Dr. Noble.
Dr. Anthony Wutoh is the provost and chief academic officer of Howard University.
He previously served in various roles at the university, including dean of the College of Pharmacy and assistant provost for International Programs.
Dr. Wutoh has also served as director of the Center for Minority Health Services Research in the Center of Excellence.
He has a bachelor's degree from the University of Maryland, Baltimore County.
He also has a doctorate in pharmacy administration from the University of Maryland, Baltimore School of Pharmacy.
Now, I know how to follow directions.
And so doctor, Dr. Wutoh said, "Keep it short."
So we're going to stop right there.
But just know that he's very accomplished.
He's done research and collected data all around the world, traveled around the world, and has received funding from institutions all around the world, as well.
So we're looking forward to hearing Dr. Wutoh as he speaks with Dr. Noble. Dr. Safiya Noble is a 2021 MacArthur Fellow, a recipient of the inaugural NAACP-Archewell Digital Civil Rights Award, and author of the highly acclaimed "Algorithms of Oppression: How Search Engines Reinforce Racism."
She is an Internet studies scholar and the David O. Sears Presidential Endowed Chair of Social Sciences and professor of Gender Studies and African-American Studies at the University of California, Los Angeles, where she serves as faculty director for the Center on Race and Digital Justice and co-director of the Minderoo Initiative on Tech and Power, and where she co-founded the UCLA Center for Critical Internet Inquiry.
>> I know in our previous conversation that you are a member of Alpha Kappa Alpha Sorority Incorporated.
>> I am.
Thank you.
>> And so I'll let you shout out.
>> Very excited to be here.
Thank you.
It's really an honor to be at Howard.
And I got this beautiful tour this morning from one of my sorors, and just, you know, to see the life and energy at Howard.
I mean, I went to a predominantly white institution, so there's nothing like this kind of energy and experience, even on a cold day, that my family and I got to experience today.
So, it's really an honor to be here.
>> And certainly is an honor to have you, as well.
Our students may be aware that you are the recipient of a MacArthur Award, otherwise known as a genius grant.
>> It's a lot of pressure.
>> A lot of pressure.
And so, why don't you share with us a little bit about your story?
How did you enter into this journey, and how did you end up in this place in terms of your areas of interest and expertise?
>> Yeah.
Thank you.
It's been a long journey.
I will say that I, you know, let me start kind of at the beginning, though.
I grew up in a place called Fresno, California, which is about halfway between Oakland and LA, and then I spent kind of my young adult years -- most of your ages; the young folks, it seems, are sitting down here in front -- in Oakland.
And my first career really was in advertising and marketing.
I worked in multicultural and kind of urban marketing.
So many of you know that before we had things like Internet influencers, we had people who were considered kind of tastemakers or trendsetters, and I worked with those kinds of folks to accelerate uptake of products by black people and urban consumers.
And it was kind of in that work that I really understood what companies were doing in relationship to black people, black communities, black consumers.
Many, many brands, as you can imagine, are negligent when it comes to black people.
Some of them are hyper-interested and borderline exploitive, we could say, too.
So I kind of was interested in how companies with so much power yielded and wielded that power toward us.
And when the recession hit and the mortgage crisis hit in 2006, black people started feeling it then.
And it really kind of was captured as a 2008 mortgage crisis.
But a lot of people of color were really feeling the beginnings of the economic downturn.
That was the time I went back to graduate school and got a PhD.
My husband and my son and I, they're here with me today.
We were in Champaign-Urbana, and I went to the University of Illinois at Urbana-Champaign, and I was so fascinated by the way that people in academia were talking about the Internet and tech companies and Silicon Valley, because we had just left the Bay Area, and I had lived through the first dotcom boom and bust.
And it was so different the way people in the university were talking about one company in particular, which was the one that fascinated me the most, which was Google.
Google had started to become a household word.
It was part of our lexicon.
"Just Google it."
Everybody was talking like that.
I'm sure many people still are using Google as a part of their everyday experience as a noun.
When I would teach my students and I would start to reveal some of the research that I was doing with my undergrads when I was a grad student, they would become hostile because it was like I was talking about their mama.
You know, they were like, "Don't you talk about Google."
And so I found this fascinating, that one company in the tech sector kind of had a hold on the hearts and minds of people and was such a profound part of the zeitgeist.
And many people were thinking about Google as kind of the new public library, or they were thinking of it as kind of a replacement for teachers or experts that, you know, you could just Google it.
You don't need to go learn in an in-depth way about any kind of subject, because you could just be reliant upon this technology.
And I knew, having just left the ad business, that that was off.
We were paying, we were spending thousands and thousands of dollars to game these systems, these search engines, to get our clients placed higher on the first page of results.
And I understood -- this was before we had what we call SEO as an industry, or the search-engine-optimization industry.
These were the nascent years of that.
And so I thought, well, what are the implications?
If the public is using search like a reliable, vetted, credible resource, and I knew we had just been spending thousands and companies were spending millions of dollars to manipulate what was happening in those technologies, how's that going to fare for us?
And that really was kind of my driving question.
So I did an early study.
I don't know, it's got to be close to 15 years ago now, where I took the U.S. census racial and ethnic categories.
And I combined those with gender categories.
And I looked to see what the results were.
And this really was kind of -- I've almost seen this study fictionalized in television shows.
Now I hear people talk about it.
Um, but I, I did these searches on black women and girls, Latina girls, Asian girls, Asian women.
And over and over again, black women and girls were, uh, overrepresented.
Over 80% of the search results were pornography or sexual, uh, commodified kinds of images or websites.
Now, you didn't have to add the word sex.
You didn't have to add the word porn.
Black girls and women were synonymous with pornography.
And that really was -- that study was a way for me to show the politics and the power and the implications of how these systems are programmed.
What kind of data are they programmed on?
What are the implications of the kind of lack of questioning that the public does about what they find?
Um, how would black women and girls in particular ever affect or change that?
I mean, when we're looking for that, we're not typically looking for porn when we're using these sites as black women.
What were the implications of this for my daughter?
For my son?
So that work, that early work, really was a claim in the fields of information science and computer science -- and sociology, communications, media studies -- that algorithms could be programmed in racist and sexist ways, in discriminatory ways, and that the data and the projects of technical systems like this were social and political.
Now, this was heretical at the time to make that claim, 15 years ago.
I would go into computer science circles and, uh, you know, men would shout at me and would say, absolutely not.
This is impossible.
Algorithms aren't political.
They're not social.
They're just math.
>> So let me interrupt you briefly.
>> Yes.
Because one of the thoughts that went through my mind as you're describing that early work is, was this intentional, this bias that you were seeing in the algorithms?
Was it intentional?
Was it a byproduct of the experiences of the developers of the algorithm?
And just any thoughts you had about that?
>> So.
I'd like to think of the way we assess these systems as really, uh, about the disparate impact, irrespective of the intention of the programmer.
I think, after training computer scientists and people who work in tech and engineering for more than a decade, that there's a high degree of ignorance and lack of knowledge about society.
Quite frankly, if you think about a place like UCLA or Illinois, where I've taught, many of the engineering students AP-tested out of all their humanities courses.
So most of them are kind of walking around on a 12th grade humanities education.
Uh, they don't take any sociology, ethnic studies, gender studies, any of these kinds of social science courses, because they have a jam-packed computer science curriculum that they have to get through.
There might be, if they're lucky, one course on ethics, and it's going to be woeful at best.
So I think the intent is really less of the question.
It's more so the lack of knowledge and the ability to see and test for different kinds of outcomes.
And, um, and, you know, the impact of their work on society is so profound with really almost no regulatory oversight.
I mean, no one's checking on the work until we experience the negative effects of these different kinds of systems.
And of course, the claim that algorithms or AI could be harmful -- you know, that claim has been like pushing a boulder up a mountain.
I mean, it's really been difficult.
I would say the majority of people who've led that conversation in the world, in the English speaking world, let's say, have been, uh, black people, black women, LGBTQ people who have been asking different questions of these systems than the makers of the systems.
And of course, I think of black women in particular, in the field that I'm in, as like canaries in the coal mine.
I mean, we just see things differently.
We're asking different questions.
We're seeing the effects, uh, of a variety of different kinds of predictive systems.
What these systems are trying to do is they're trying to predict what they think a user is looking for.
So they're using historical data.
We're going to talk about this, I hope, as we go on and maybe talk about newer technologies like generative AI.
Um, they're sucking in all the things we know about society that are used as the training data in search, which is the precursor for things like ChatGPT.
And, uh, there isn't a lot of ability to think critically.
I mean, the majority of people in those early days when I would show these results at conferences, they would say, well, this is just what people are looking for.
This is the public.
This is the public's fault.
This isn't the fault of the makers of the technology.
Well, we know that's not true.
Um, and then we have to ask ourselves as well, which users are being prioritized?
Who is the imagined user that's looking for porn about black women?
About black women and girls?
That imagined user is probably not a 15-year-old black girl.
So the question then is, who are the imagined users in the programmers' minds?
Um, what is it about the naturalization of this racist and sexist stereotype of the hypersexualized black woman or girl?
We know that this racist and sexist stereotype has its genesis, its origin story, in the end of the transatlantic slave trade, when the only way you could reproduce the enslaved labor force in the Americas was to force black women and girls to have many children who were born into bondage.
So a stereotype emerges that black women and girls want to have sex more than anyone else, that we want to have a lot of babies.
This, its contemporary version, is the welfare queen, right?
That's the kind of genesis Melissa Harris-Perry has written all about.
There's lots of great black feminist work around this -- bell hooks.
So it's important to understand the naturalization of stereotypes, which then gets turned into things we call data.
Data is a word that flattens these historical experiences, and then it gets naturalized and normalized, spit back to us in something like a search engine, and it becomes very difficult for people to understand what just happened.
>> So we're talking about algorithmic discrimination.
Um, for, uh, I'm a pharmacist by profession and, um, don't have a lot of experience in tech.
And I know we have a lot of students here who aren't necessarily from a technology background.
In lay terms, what is an algorithm, and how do we get to the point of -- what is the impact of algorithmic discrimination?
>> So an algorithm is really a set of instructions that you give to a computer for it to compute.
And typically, in the most basic way of thinking about it, it would be instructions to program a computer to, let's say, work through a decision tree.
If these conditions, then those conditions, right, or these types of outputs.
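To make the "if these conditions, then those outputs" idea concrete, here is a minimal sketch in Python; the loan-screening fields and thresholds are entirely hypothetical, invented only to illustrate what a decision-tree-style algorithm looks like, not any real system.

```python
# A tiny, hypothetical decision tree: an algorithm as a plain set of
# instructions. The fields and thresholds are invented for illustration.

def loan_decision(income: float, credit_score: int) -> str:
    """Walk a small 'if these conditions, then those outputs' tree."""
    if credit_score >= 700:
        return "approve"
    if credit_score >= 600 and income >= 50_000:
        return "approve with review"
    return "deny"

print(loan_decision(income=42_000, credit_score=610))  # -> "deny"
```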
Uh, it's often, um, working across different kinds of data.
An algorithm might be applied to different types of, again, data.
I always like to put "data" in quotes, which might make some of my colleagues in data science frustrated.
But I want to do that so that we, again, keep making this word a complex word.
Um, an algorithm might be, uh, very, very sophisticated and be so complex that even the makers of the algorithm don't quite know how it works.
So in the case of search, uh, you know, the search algorithms for big tech companies are so complex, hundreds if not thousands of people have worked on them over time.
It's not like an old-school car where you can just, like, pull up the hood and, like, tinker with the carburetor.
Uh, it's a system where you can't even figure out what piece of code is broken or why we're getting particular outputs.
And this is one of the reasons why -- you know, now fast-forward -- when you do searches on black girls or different kinds of things that I've talked about in the book, most of those things have just been down-ranked, because they really don't know how to fix the system.
You know, they have to, so they have to manually go in.
And this happens all the time, all over the world with different kinds of platforms where they don't really know why the system does what it does and gives us certain types of outputs.
Um, part of the reason why algorithms, I think, can be discriminatory is because they're often trained on data sets that are also full of discrimination.
>> And what does that mean?
>> Okay, so let's say we have, I'm going to now, I'm going to try to go over into your field.
I may be unsuccessful.
So you help me.
If you have health care data and you have 100 years of health care data, and you train your algorithm to predict who's most likely to, let's say, um, develop cancer, you're going to, um, have a very accurate, probably, algorithmic output.
You're going to have output that's going to probably identify white men with money who've had the best health care.
>> Based on the data.
>> Because those are the people who are going to be in the data set, right?
And all of the, uh, black people who didn't make it into the data set, therefore, are not going to get predicted into your output as people who should be pre-screened.
So these are the kinds of just very simple ways that, uh, algorithmic discrimination happens, which is, you have an algorithm that's looking for a set of scenarios or outcomes.
And it is, uh, looking at data that is going to be fraught with all kinds of historical patterns.
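A toy sketch of that dynamic, with invented numbers rather than real health data: if one group dominates the historical records, the rates a model "learns" simply reproduce that skew in who gets recommended for pre-screening.

```python
# Toy illustration (hypothetical numbers, not a real health data set):
# a "predictor" trained on historical screening records. If one group
# dominates the records, the learned rule reproduces that skew in who
# gets flagged for pre-screening.

from collections import Counter

# Hypothetical historical records: (group, was_screened_and_diagnosed)
records = [("group_a", True)] * 90 + [("group_b", True)] * 10

# "Training": estimate, per group, how often the outcome appears in the data.
counts = Counter(group for group, diagnosed in records if diagnosed)
total = sum(counts.values())
learned_rate = {group: n / total for group, n in counts.items()}

# "Prediction": recommend pre-screening in proportion to the learned rates.
print(learned_rate)  # {'group_a': 0.9, 'group_b': 0.1}
# group_b is under-recommended not because its risk differs, but because
# its members were missing from the historical data in the first place.
```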
>> So maybe one way to think about it is that the algorithm is the starting point, but you have to feed information into the algorithm for it to improve its prediction?
>> Yeah, it's looking for patterns.
Right.
So typically, most of the kinds of algorithms and AI that we experience today are looking, you know, for patterns. And in large-scale, um, machine-learning kinds of algorithms, there might be so much data that's disconnected and not in a neat data set that it's also looking for patterns that our own human mind would have a hard time finding, because it might be millions of data points.
Right?
So it would be very difficult for even large teams of researchers to do that.
So we use these kind of machines, these computing methods to help us look for patterns.
But the patterns, again are going to not always tell the full story.
Um, let me give you another example.
Amazon was building an HR software tool because, well, they get tens of thousands of resumes every day at Amazon.
And obviously one person or an HR department can't go through 10,000 resumes a day.
So they built their own custom, um, AI that would scan resumes and then help them find good employees.
So guess what the output of that algorithm was.
It screened out 100% of the women.
100% of the women were screened out because the algorithm was looking at certain types of keywords, and if certain keywords were present on a resume, you were put out.
So think about how many women put that they were, like, you know, on the women's gymnastics team, or were a member of a sorority, or any kind of thing that was gender-identifying, um, that might be coded negatively.
Right?
Because in their historic data about who's been successful at Amazon, who's been promoted more, who's more likely to go into management, guess what?
It's been men.
So these kinds of things are really important.
And you have HR screeners now.
They've been used all over corporate America.
Every industry is using these, universities use them.
Um, and it's difficult to understand.
Of course Amazon scrapped that software immediately.
Obviously they knew this was a problem.
But when you just go, you know, one level further, you start to see screeners or predictive systems that are going to skew toward a set of results.
And if you're not looking at the impact of what those results are, it will be very difficult for you to even know that there's discrimination happening.
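A stripped-down sketch of that keyword-screening failure mode, with invented keywords and scoring rather than the actual system described above: terms that merely identify gender end up coded as negative signals, so the skew only shows up if you examine the outputs.

```python
# Stripped-down sketch of a keyword-based resume screener. The keyword
# lists and weights are hypothetical, invented for illustration; the
# point is the shape of the problem, not any company's real system.

PENALIZED_TERMS = {"women's", "sorority"}   # hypothetical "learned" penalties
BONUS_TERMS = {"python", "sql"}             # hypothetical "learned" bonuses

def score_resume(text: str) -> int:
    """Score a resume by counting bonus terms and subtracting penalties."""
    words = set(text.lower().split())
    score = sum(1 for term in BONUS_TERMS if term in words)
    score -= sum(5 for term in PENALIZED_TERMS if term in words)
    return score

# Two resumes with the same skills; only the gender-identifying term differs.
print(score_resume("Captain, women's chess club; Python and SQL projects"))  # negative
print(score_resume("Chess club captain; Python and SQL projects"))           # positive
```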
>> And we were discussing this earlier and I had mentioned that I'm a baker.
>> You did.
>> I bake as a hobby.
And I was trying to use chocolate chip cookies as an example.
And really my question was if the initial algorithm is biased to start or it's discriminatory to start, is there a way to fix that along the way?
So my analogy was, if I'm baking chocolate chip cookies and I forget to put chocolate chips in the mix and put them in the oven, I can't then add chocolate chips once I've taken them out of the oven.
Well, I could um, but they're not going to be incorporated into the cookie.
And is that a reasonable analogy?
Can you fix an algorithm that is biased or discriminatory in its construct, so that at some point, regardless of how much data you add or how you train it, can you, um, adjust for that discriminatory bias?
>> This is a complex question.
And the reason why is because I think there are a lot of different points of view right now in the field about how to mitigate or solve these problems.
So.
Your analogy is really interesting of, you know, the recipe, the recipe calls for certain things.
You have those inputs and you're going to get a particular kind of output.
You could really think of algorithms working in those ways.
The challenge here is that, uh, there are some computer scientists and people, even in my field of information science, who would say, well, we could see that we didn't get enough of the right kind of output, and we try to solve it after the fact.
There are people --
>> That also could be used as a justification.
We use this very extensive, comprehensive computer database and this is what we got.
>> Yes.
>> And that can be used as a justification as opposed to suggesting there may be something wrong with the algorithm.
>> It's difficult because in the case of large, uh, companies and small, many times the algorithm or the AI is proprietary.
So it's very difficult to know what's happening if you're working in a proprietary kind of system.
So all you can study, if you're interested in the outcomes -- we look at what we got.
Did we get enough chocolate chips in there or not?
Were they the right kind or not?
I think there are people who also would argue that, um, you could change up the recipe and try to make it better.
Uh, so there are people who work in a field called, um, algorithmic fairness or ethical AI, and they're very interested in these questions of fairness and how do you, let's say, bring in more data of an underrepresented group so that you can kind of mitigate?
And this is something statisticians do all the time.
I mean, these are also kind of like statistical systems.
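One common mitigation move from that fairness literature, sketched here with made-up counts: reweight the underrepresented group so each group contributes proportionally to training, much as survey statisticians reweight samples. Whether reweighting is even the right response is exactly the debate she turns to next.

```python
# Hypothetical reweighting sketch: make an underrepresented group count
# proportionally in a training objective. The counts and target share
# are invented for illustration only.

group_counts = {"group_a": 900, "group_b": 100}   # hypothetical training data
target_share = 0.5                                 # desired share per group

total = sum(group_counts.values())
weights = {g: (target_share * total) / n for g, n in group_counts.items()}

print(weights)  # {'group_a': 0.555..., 'group_b': 5.0}
# Weighted contributions: 900 * 0.555... = 500 and 100 * 5.0 = 500,
# so the two groups now count equally in the training objective
# despite the 9:1 imbalance in the raw data.
```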
Um, there are others of us -- I would put myself in this category -- who would say, is it appropriate even to use this kind of modeling for this problem?
Maybe that's not the appropriate approach.
Um, maybe we don't even want that.
One of the best examples of this, just to show the continuum would be in facial recognition.
So, facial recognition -- you know, the most famous study about, uh, discrimination in facial recognition was by my colleagues Joy Buolamwini and Timnit Gebru and Deb Raji, who did this study called Gender Shades.
They're all three black women.
Uh, Joy was a student at MIT.
She has a new book out called, um, "Unmasking AI."
"AI Unmasked"?
"Unmasking AI."
Um, she, uh, was studying facial recognition when she was a PhD student at MIT.
The facial recognition would not recognize her face.
She took a white, like, theater mask.
Like -- like, like there's my mask.
She put it on her face, held it up.
Facial recognition worked perfectly.
They wrote this paper.
It's a famous paper now called Gender Shades, about how facial recognition systems don't recognize black faces.
They're worst on black women's faces.
They work best on white men's faces.
Now, you take a study like that, and there are people who use a study like that and say, well, let's build better facial recognition systems that see black women's faces and people of color -- our faces -- better, and that's the solution.
That's a fairness question.
Make the algorithm more fair, so it fairly represents, uh, everyone's faces.
Then there are others of us on, let's say on the other end of the spectrum who are like, we don't actually need facial recognition technology at all.
There's no application of facial recognition technology where there isn't, um, a likelihood of harm or discrimination or surveillance or control that is going to be tied to it.
So what we want is to not be recognized.
We want less recognition by these systems.
And of course, there are movements in Detroit, San Francisco, other places, New York, to ban facial recognition systems.
Right, because they're most likely to be used, let's say, in policing scenarios, in border-crossing scenarios and retail scenarios -- places where we're already hyper-profiled.
So, um, it's contextual when we're talking about fairness.
What would be more fair from that vantage point would be no facial recognition software, making that illegal.
So this is why I say it's a complex, it's a complex situation.
What is fairness in any given scenario with any given technology?
>> You alluded to this.
Um, but I'll raise the point again.
Who is policing these systems?
Who is really monitoring to limit as much as possible bias and algorithmic discrimination?
Is this sort of an industry, um, responsibility, or who is monitoring these processes now?
And what are your thoughts about what we should be doing?
>> Simone Browne, who wrote this incredible book called "Dark Matters" says her handle on Twitter and social media is wewatchwatchers.
And I would say that it's people like us, you know, scholars, activists, journalists who are watching, who have been kind of in the forefront of naming the problems, uh, which is deeply unsatisfactory.
I mean, for us, most of us are public servants.
We -- you know, I work for UCLA.
I work for the state of California.
I'm not really paid to be a watchdog on the tech industry, even though that is what my research is about.
It's, uh, you know, it's like a drop in the bucket compared to what's rolling out every day, every week, every month.
So we don't really have in the US a robust regulatory environment yet; that's developing.
I would say the two most important agencies right now are the Federal Trade Commission, led by Lina Khan, who, if you don't know, is a law professor; before, uh, President Biden named her to chair the FTC, she wrote the most important paper about why we should break up Amazon as a monopoly.
Uh, so you know what her orientation is at the FTC.
She's like, I'm coming for you, big tech.
And that's important because the FTC puts the guardrails in place for the public, for consumers; that's their orientation.
If there are harmful consumer products, they're on it.
Uh, that's difficult.
Obviously, you're in D.C. You know better than I do how difficult it is to do things in D.C., legislatively and through regulatory agencies.
I would say the other agency that's very important right now is the Consumer Financial Protection Bureau, uh, directed by Rohit Chopra, who was formerly of the FTC -- an amazing, incredible advocate for communities and for disenfranchised people.
And he has really led the charge around things like investigations of data brokers.
Data brokers are companies who take all the information that can be gathered about us.
Everything you're doing on your phone, everything, everywhere you are.
That's a big market where information about us, data about us, is sold 24/7 -- thousands of data brokers.
They are building complex data profiles on all of us.
And the CFPB is trying to really crack down on that as well and make that illegal.
In California, we have the California, um, Privacy Protection Agency.
So at state levels, we're starting to see some traction and action around regulation.
But I would say we're nowhere near Europe, um, with the EU and GDPR and other kinds of policies.
And the frameworks are limited also around regulation because regulation in most countries is organized around individual rights.
But, you know, we understand collective and community rights.
So it's very difficult if you're, um, discriminated against by some type of AI and you have to litigate that or take a company to court or, you know, try to have like civil claims.
We don't really have criminal statutes yet.
So we need Congress to pass legislation on, um -- the environment's difficult -- but what about when we're, like, collectively as a community, uh, experiencing some type of disparate impact? That's much more difficult.
Um, we don't really have the right laws in place yet for those protections.
So what happens is, like, we know working people, poor people, middle-class people cannot afford it -- no one here is going to sue Google.
Okay.
That's not going to happen -- not successfully -- because we don't have the resources and the armies of lawyers that we would need to win.
Um, what we've really been reliant upon have been, uh, very famous people suing companies and trying to get some type of, uh, statute on the books that then maybe in ten years we'll be able to use, uh, to our benefit.
But I think that is not really a sufficient way.
It's reactive.
It puts us on our back foot.
What we need is proactive frameworks of protection, like we have, let's say, with environmental protection.
You know, none of us should have to litigate to get clean air or clean water.
That should just be a right, a human right, a civil right that we get to experience and enjoy because those protections have been put in place.
And that's really, to me, a much healthier and more robust way to think about regulating.
>> The underlying technologies behind artificial intelligence, especially over the last year, seem to be advancing rapidly.
Um, is the genie out of the bottle?
Is it too late at this point to really put in place frameworks that protect against bias and that protect against algorithmic discrimination, or do we still have time to really try to, to manage, um, the genie?
>> Well, I try to think at the level of paradigm, because that's really where my work is -- it's kind of like shifting the way we thought about it before and how we'll think about it going forward.
Uh, you know, before, algorithms were just math, and then shifting that to, like, you know, algorithms are political and social.
Right.
So is the genie out of the bottle?
Well, really, uh, the technologies are here.
Certainly.
What can we know about them that might shift the way we see them?
Uh, I'll give you an example that we all can kind of understand.
So when the technology of the automobile was introduced -- I mean, this was incredible, especially for black people. Like, our ability to move prior to the 1950s and 1960s, when automobiles started to become mainstream, was really difficult.
And we couldn't move everywhere either.
Not only was it just not safe, but there were constraints.
And then you have the automobile, and you can like pile up in the car.
And inside that space there's some safety and there's all kinds of new freedoms.
And then we build different kinds of networks.
You know, the Green Book.
We, like, have all these different ways of, um, taking advantage of a technology like that.
And yet inside a sense of freedom that we think we have with the automobile, we have all these other secondary and tertiary effects that might not be obvious to us, like climate change.
We're dealing right now with the -- you know, we're in the generation of dealing seriously with climate change, from oil and gas and what, um, automobiles and kind of corporate pollution have done.
We would not have been able to have automobiles around the world, and buses and all this kind of transportation that depends upon it, without the incredible extraction of rubber, right -- from, um, you know, the Belgians were, like, vicious colonizers.
And what happened to people in order to, um, export rubber?
Um.
So we have a Pan-Africanist way of thinking about the automobile now, if we think about our relationship to the continent and of course, then to the waste, what happens?
We know when there's waste in the world that goes back to our people, our communities, all over the US and abroad.
So I think of the technology moment that we're in right now around digital technologies, network technologies, as being similar in that there are certain affordances and freedoms that people are feeling and experiencing with them.
And there are also these other costs that are hidden from view on purpose.
All right.
So our job, especially here, is to think about how we would do these things differently.
You know, so I tell my students all the time -- like, they're like, "Dr. Noble, I made this new app. Will you try it?"
And I'm like, I don't want to hear about the app.
You know?
I'm like, I want you to bring me back a phone that when we're done with it, we can eat it.
All right?
Like, plant a tree, you know -- I mean, put it in the ground and something grows from it.
Like, really, we got to get a totally different paradigm about this, so that the network devices, the Internet of things, all the things that we're dealing with aren't dependent on, um, mining minerals from the Congo and extractive mineral industries, and e-waste being shipped to Ghana and ruining pristine, beautiful wetlands in the heart of the city of Accra.
We have to actually think, and this is where I think black people, we have the superpower of thinking in these ways about an ecosystem and our relationship to other people in the world, and then we can innovate differently.
And to me, that is a huge opportunity that, uh, our counterparts are not thinking about.
>> Let's shift the conversation, okay?
It's an election year.
And, you know, we hear a number -- >> I thought it was already depressing.
And now you're taking it there.
So here we go.
>> What is being done that we either may not be aware of, or we only hear snippets of? What is your concern regarding artificial intelligence, particularly as it relates to elections?
Uh, in this country or in other countries?
What things sort of stand out to you that we need to be thinking about, that we need to be concerned about, that are already being utilized and that we just may not even have awareness of?
>> Okay, well, we know that Facebook has ruined electoral politics in most liberal Western democracies, including this one in the United States. We see a huge rise in authoritarians and authoritarian politics around the world that is mostly facilitated by large-scale, uh, social media platforms.
Uh, that is getting worse.
Facebook, two weeks ago I think now -- this last week or the week before -- declared that they are going to ban, um, political speech as a way of supporting democracy.
Nothing could be further from the truth, because what will happen now is that journalists will not be able to report and have their material seen.
So it can be posted, but it won't be amplified algorithmically.
We have lots of evidence.
January 6th -- um, you know, the white supremacist uprising against the United States government.
The backlash against President Barack Obama and the election of Donald Trump.
Uh, we have, uh, a huge industry of bots and other kinds of interference from other parts of the world in US politics, many times operating under the guise of being black people, um, and black women.
You know, the work of Shireen Mitchell, who was the first person who really recognized these kinds of, uh, fake accounts that were pretending like they were black women -- you know, like using AAVE and trying to, uh, engage in politics -- and really are just straight-up voter-suppression kinds of accounts.
Um, also pretending to be black people and black women to, uh, whip up white nationalists and white supremacists in the United States. And of course, that's because black women are the most reliable voter demographic in the United States, um, in terms of, uh, Democratic politics.
So I think we should expect that is going to intensify.
It's being refined with these new policies coming out of Facebook -- which, you want to remember, 80% of the user base of Facebook is outside of the United States.
And most of the people, especially on the continent and other places that are using Facebook, that is their experience of the internet inside the closed ecosystem of Facebook.
That is what they experience as the internet.
So I think we are going to, um, have a lot of tough work ahead of us in this next presidential election.
That is the most polite way I can say it, because really, I am not sleeping at night.
I'm asking my husband, like, where are we moving?
Are we moving?
Are really?
Is everyone leaving?
Where's everyone going?
Um.
If we're staying, where are we staying, what are we doing?
Everybody, move to California, because I feel like we could go it alone if it really got, you know, hectic. Um, maybe D.C.
So, uh, these are some of the challenges.
I think we should be expecting more voter suppression.
Listen, when you see rappers and you see artists who are like, hey, you know, um, all the candidates are terrible choices.
I'm just going to sit it out.
That's a voter suppression tactic, okay?
Just straight up on its face, like that is what is going on.
And so we should expect to see a lot of that in this next election.
And this next election, I think is extremely consequential for us.
We've already seen a massive rollback of civil rights legislation in our lifetime, which is just stunning.
Um, and a Supreme Court that has moved to the far right, and I think we should expect to see more of that if we do not get involved.
The only thing that is really giving me life right now is seeing the Gen Zs that are running for Congress, because you all keep running for Congress, like, you know, like just getting out there and really building some type of progressive caucus.
I think it's super important.
I'm saying this now, not as a professor.
I'm just saying this as me, unplugged, with just what I feel is happening.
I think it's really important.
It's easy to give up on electoral politics.
When I was a young person, I did not believe in electoral politics.
I was really like, it's just, like, a scam, you know?
It's just not, it's not doing what it needs to do for us.
Um, it's not doing what it needs to do for us.
It's going to do even worse.
It's going to do even less.
You know, it'll actually become extremely harmful, like, like explicitly damaging and targeting.
Um, and I think that, uh, you know, we should think of that as one of many fronts that we need to make sure we shore up in ensuring that we don't have, um, you know, a fascist takeover of the US government.
>> What are algorithmic election systems, if you can explain?
>> Oh, my gosh.
Well, I was recently, this is what I do in my spare time.
I watch old, uh, Davos talks from tech leaders, which is the most hideous thing you can do.
So don't do that.
But I'm here to report out that, uh, in 2017, in one of the last ones I watched, um, Sergey Brin, who's a co-founder of Google, was being interviewed at Davos on the main stage, and he was asked about the future of elections.
And you know what he said?
He said, in the future we will have so much information about people, we can predict what kind of candidates, which candidates they would vote for anyway, so we won't need elections.
>> I've seen a movie like that.
>> I know.
That's the future they imagine for us.
And he's not wrong.
They do have that much information about us.
In fact, when they ask people about their search results -- would they want anyone to know about their search results? -- like, you know, 90% of the public is like, absolutely not, do not tell anybody what I'm searching for.
Um, and that's really important.
But Google and Facebook both would say they know us better than our own spouses or our parents do -- and that is consequential for us.
>> They commoditize it.
>> They commoditize it, they sell it, they use it and you know it.
If you open up Instagram and it's just really got your number with the skincare and the bags and the shoes, I don't know, is that what my Instagram is giving me?
I thought it was giving everybody that.
Um, so you know, they know, they know and they're extremely effective.
Shoshana Zuboff, in her book "The Age of Surveillance Capitalism," describes it as total behavioral manipulation that we are living under.
We're moving into this era of capitalism where we are just under total behavioral manipulation.
Um.
She doesn't have, like, a racial-capitalism kind of critique; that is missing.
But I will say, um, we are so astute about living under systems of oppression that we should be able to outmaneuver, outsmart, work around, refuse the worst that these systems have, and figure out what's the best that can work for us.
>> First of all, I want to thank each of you for being here -- primarily our students, but we have a number of staff and faculty who are here as well.
Uh, our Center for, um, Applied Data Science and Analytics -- and Bill Sutherland is the director of that, uh, center.
And we've talked about our master's degree in applied data science and analytics, and Dr. Amy Yeboah Quarkume is the, uh, um, program director for the graduate program.
Certainly want to also acknowledge and thank, uh, the Office of the Provost at Howard, the Office for Academic Innovation and Strategic Initiatives, and Keanna -- I'll skip with this -- in the back.
And so please join me in giving a special round of applause to Dr. Safiya Noble.
>> This program was produced by WHUT and made possible by contributions from viewers like you.
For more information on this program or any other program, please visit our website at whut.org.
Thank you.















