
Story in the Public Square 2/7/2021
Season 9 Episode 5 | 27m 39s
Jim Ludes & G. Wayne Miller interview New York Times technology journalist, Kashmir Hill.
Hosts Jim Ludes and G. Wayne Miller interview Kashmir Hill, a technology journalist for the New York Times. Known for writing about the unexpected and sometimes ominous ways technology has changed our lives and compromised our privacy, Hill discusses federal regulations on tech companies and the data collected from users.
- We live in an age increasingly defined by the intrusion of technology in our lives.
Today's guest is a technology journalist whose work explores the looming tech dystopia and how we could avoid it.
She's Kashmir Hill this week on Story in the Public Square.
(bright upbeat music) Hello and welcome to Story in the Public Square where storytelling meets public affairs.
I'm Jim Ludes from The Pell Center at Salve Regina University.
Joining me, as he does every week, is my great friend, G. Wayne Miller of The Providence Journal.
Each week we talk about big issues with great guests, authors, journalists, artists and more to make sense of the big stories shaping public life in the United States today.
This week, we're joined by Kashmir Hill, a technology journalist for the New York Times.
Kashmir, thank you so much for being with us.
- [Kashmir] Thank you for having me.
- So we wanna talk about some of your recent reporting, but I'm kind of interested how you got interested in these issues of technology and privacy.
- Well, part of it was that I came to journalism later in life as kind of a second career for me.
And so as I was starting the practice, I was just kind of struck by how much we invade the privacy of the people that we're writing about, the way that we dictate their reputations.
And so just early on, I was thinking a lot about privacy just being the kind of journalist I was too.
I was a blogger at a kind of a legal gossip site called Above the Law.
And I was doing journalism school at NYU and I was supposed to choose a theme.
And so I ended up pursuing something I called The Not-So Private Parts.
And it was about privacy in the law, but I just kept ending up writing about technology, writing about how Facebook was changing our lives, writing about online cookies and how they followed us around the web.
And I kind of backed into becoming a technology journalist.
- Well, so, I think most Americans at this point have at least a passing knowledge of the fact that these big tech firms track so much of our personal and private information.
But for the five people who are uninitiated out there (Wayne laughing) what are we really talking about?
- I mean, we're talking about, it's an industry that really predates the internet.
Agencies have been doing this for a long time.
It's just groups, companies, the government that want to track our behavior, learn about us and then figure out how to better sell us things, or figure out when we do something wrong.
But it has just gotten so much more detailed and sophisticated in the internet age because we are doing behavior that can be tracked and captured in a very minute way.
Your browser can see all of the websites that you visit.
Your smartphone is collecting your location everywhere you go.
It's just all of these little ways that we're kind of giving up potentially sensitive information about us and information that's used to make judgments about us.
- So what are some of the other devices that are tracking what we do and taking our data?
You mentioned a few of them, but we live in a highly tech world, certainly in the United States, many of us, we all have a lot of devices.
Give us sort of a laundry list of devices that (chuckling) we can do the whole show just for the (indistinct).
(all laughing) - I might take the whole half hour, but basically, I mean anything with a camera, a microphone, or computer chip could be collecting information about you.
So NBC News just had a great report about how rental cars, and cars in general, are collecting information about you.
Oftentimes they know how much you weigh.
If it's connected to the internet, your toothbrush, your coffee maker, certainly your phone, your computer, if you have a smart speaker in your home, like an Echo.
I mean all of these devices are designed to gather data about us, essentially to kind of give us better services.
But yeah, many, many things that we interact with in our day-to-day lives, especially in the pandemic where so much of what we're doing is virtual, information is being collected about you.
- And so is there a caution?
So, yes, they're collecting this data.
What are they doing with it?
And is that something we need to be concerned about?
- It's very complicated because there are many benefits that come from this data being collected.
Netflix, looking at every single show you watch over time, they make better and better recommendations to you.
But there can be concerns.
This kind of data is definitely used to implicate people in crimes nowadays.
The location data on your phone, location data collected by your email provider, your internet service provider, that information can be used to sell you better ads, but it can also be used by the authorities to figure out where you were at a certain date and time.
What I tend to cover is where the invasion of privacy seems very sensitive and intrusive.
So one of the more R-rated examples of information collection I found very disturbing was a sex toy that was Bluetooth connected that you could communicate with somebody long distance and they could control it from afar.
And it was collecting all kinds of information about how it was used, how long it was used for, what the settings were, how hot the toy got.
And so I contacted the manufacturer of the device.
They were in Canada, and I said, why are you collecting this sensitive information?
And they said, "Well, it's great for market research."
But I don't think most of their users realize that one of the most intimate human experiences you can have was being data mined.
So it's a whole spectrum.
Sometimes we're happy to give up that data.
And sometimes we are not aware that it's being collected, or we don't know how it could be used to harm us in the long run.
- Do you think the average person understands the depth of data that is collected and the many places it's collected from?
I mean, just the average people, particularly now in the pandemic, people get up in the morning, they check websites, they go on Facebook, they may be texting their friends back and forth because we're isolated and we want to stay connected.
But do you think people have a full comprehension of exactly what these companies that offer these platforms are doing with this information?
- No, and I think it's honestly impossible to wrap your head around because it's being done by machines.
It is so easy to gather the data, and people will often still discover it; you'll see these stories go viral.
One says, go to your iPhone and, deep in the settings, look at your most frequent locations, and it'll show where you spend your time, and it shocks people.
Or, download your data from Google, and you'll discover that they've been tracking your plane flights for the last decade and it's all there.
I just don't think it's possible to kind of function as a human being and have this constant awareness of the data that's being collected about you, which is why over time a lot of critics have said that we need privacy laws to protect us, because as an individual, it's too hard to monitor and it's too hard to prevent that data from flowing out.
- So let's say you have a sudden realization this is all being collected and you wanna stop it.
We can get into steps you can take, but if you reach that point, that data that's already been collected is already collected.
It's not like Google's gonna throw it away, or you can write to Mark Zuckerberg and say, please delete whatever, right?
I mean, that's archived, that's on a server somewhere.
Am I correct on that?
- Yeah, and it's just so hard.
I'm sure we're going to get into it, but it's so hard to predict the way that your information could ultimately be used.
And that's one of the things that's so hard about privacy.
It's come up, I've been covering face recognition a lot over the last year and a half, particularly a company called Clearview AI that went and scraped billions of photos from the public web.
So photos that people had posted to Instagram, and (indistinct), and their employer's site, just kind of all over the place.
And the company was able to get biometrics from that, like get people's face prints and then create a facial recognition app that can identify almost anyone with a public photo on the web if it's among those billions that they collected.
And I just don't think that people when they were posting their pictures online had any idea that one day, this means that they may not be able to be anonymous in public anymore.
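To make the idea of a "face print" concrete, here is a minimal sketch of embedding-based face matching, assuming a generic embedding model simulated with random data; none of the names or numbers below reflect Clearview AI's actual system.

import numpy as np

rng = np.random.default_rng(0)

def face_embedding(image):
    # Stand-in for a neural network that maps a face image to a
    # fixed-length unit vector (a "face print"); real systems learn this.
    flat = image.flatten().astype(float)[:128]
    vec = np.pad(flat, (0, 128 - flat.size))
    return vec / (np.linalg.norm(vec) + 1e-9)

# "Database" built from scraped photos: name -> face print.
database = {name: face_embedding(rng.random((64, 64)))
            for name in ["alice", "bob", "carol"]}

# Photo to identify; cosine similarity of unit vectors is a dot product.
probe = face_embedding(rng.random((64, 64)))
best = max(database, key=lambda name: float(probe @ database[name]))

# A high score is a lead, not proof of identity: the threshold trades
# false matches against misses, which is why a match alone should
# never be treated as probable cause.
print(best, float(probe @ database[best]))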
- There were no laws that prevent that company from doing that?
Am I correct on that?
- There are some laws, very few, there's no federal law in the United States.
There is, interestingly, a law in Illinois from 2008, the Biometric Information Privacy Act, or BIPA, that says you're not supposed to use somebody's face geometry without their consent.
A lot of people didn't know about this law for a long time.
It was on the books.
It cost Facebook $650 million in 2019 because it had been doing automated photo tagging with face recognition without getting many people's consent first.
But it is one of the few laws that kind of, is supposed to prevent what Clearview AI is doing.
Clearview AI didn't seem aware of it.
And after I did this kind of exposé on the company, they pulled out of Illinois and tried to erase all the photos of people from Illinois from its database, but they are fighting a lawsuit over the law now.
And they're saying, no, it's an unconstitutional law, that it violates their business's First Amendment rights.
So we'll see what happens.
But, yeah, for the most part, there aren't really laws to protect us from that.
- On that point, I'm curious your thoughts about where is Congress on this issue?
Do we simply discount it as the product of the well-paid lobbyists who work for big tech, or is Congress just missing the boat on this?
- And tech companies have (indistinct) trying to prevent regulation of the collection of data and the analysis of data.
I mean, privacy is so complicated and it's such a spectrum.
Some people are very comfortable with all of this data being collected and used and some people aren't.
And so I think it is a hard issue to legislate or regulate 'cause there are so many gray zones.
And it is complicated.
I mean, if there is a photo on the web of you, should it be illegal for a company to kind of analyze that photo and get your face print?
It's arguably public information, but it's just that we kind of embed sensitive information in ways that we don't expect.
And the technology to analyze information we've put out there is getting better and better.
So the way it can be used can be more harmful.
- So we'll get into facial recognition and how law enforcement has used or abused it in a moment, but before we do I wanna hit one more sort of home device.
And that's the smart speaker, Amazon Echo, there's Google Home, there's Apple HomePod.
I know in this house, we refuse to get those things because we're paranoid, and maybe justifiably so, but talk about those devices, they're listening, right?
And recording, am I correct in that?
- So there's a distinction to make, those devices are listening all the time, but they're not recording all the time.
They are only-- - How do we know that?
(both chuckling) - I've actually worked with somebody, researchers who tested this to see kind of how often the devices got-- - Okay.
- Activated.
- They had to watch hours and hours of television to see kind of what other things woke it up.
And they do sometimes get turned on unexpectedly and there have been problems.
There was a couple who were talking about something and the device all of a sudden started (indistinct) their conversation and then somehow sent it to somebody else.
There are weird things that happen like that, but yeah, so you have that in a home, you're talking to it all the time and these companies are collecting that data over time.
It's not recording everything that's happening, but it does have those discrete messages.
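As a rough illustration of that listening-versus-recording distinction, here is a minimal sketch in which simulated text "frames" stand in for audio and the wake word is made up; a real device runs a small on-device wake-word model instead.

from collections import deque

WAKE_WORD = "alexa"  # made-up trigger for illustration

def heard_wake_word(frame):
    return WAKE_WORD in frame.lower()

def process(frames):
    rolling = deque(maxlen=2)   # short pre-roll, constantly overwritten
    recording = []
    active = False
    for frame in frames:
        rolling.append(frame)
        if not active and heard_wake_word(frame):
            active = True
            recording.extend(rolling)   # keep the moment of activation
        elif active:
            recording.append(frame)
            if frame == "<silence>":    # end of the spoken command
                return recording         # only this snippet is kept/uploaded
    return recording

# Everything before the wake word is discarded as the buffer rolls over.
print(process(["tv chatter", "more chatter", "Alexa play a song", "<silence>"]))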
And so this is one of those products that I wonder 20 years from now how that information will get used.
We do have an Alexa in our house.
We have an Amazon Echo.
And my daughter, at around two years old, started being able to talk to it, where it could understand her.
So she can order songs now and ask it questions, and ask for knock-knock jokes and like fart noises.
(all laughing) - [Wayne] That's a true kid.
- I also wonder, if we keep it in our home until she's 18, Amazon will have this kind of daily interaction that this girl, and then woman, is having with this device. What kind of profile can you build from that?
How would they potentially use it?
And it's hard to predict right now.
And we find it convenient to be able to play songs and set timers with just our voice.
But I don't know, am I going to regret that in 20 years?
As a privacy writer, I suspect that I will.
- And I wonder if we can, again, for the five folks who are uninitiated, what are these profiles that these companies build with all of the data that they gather about us?
What do they do with that data?
What do they do with those profiles?
- So it depends on the company.
So Netflix, for example, if you're a Netflix watcher and you watch certain kinds of movies, it's looking over time at what your preferences are.
And it's starting to say, okay, this person likes romantic comedy.
This person likes Spanish movies with action.
And so that's kind of a benign profile that might get built over time.
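As a toy illustration of that kind of benign profile, preference profiling can be as simple as counting tags on what someone has watched; the titles and tags below are invented for the example.

from collections import Counter

# Hypothetical viewing history; tags stand in for whatever metadata
# a real service attaches to titles.
history = [
    {"title": "Movie A", "tags": ["romantic comedy"]},
    {"title": "Movie B", "tags": ["action", "spanish-language"]},
    {"title": "Movie C", "tags": ["romantic comedy"]},
]

# The "profile" is just accumulated counts of what you've watched.
profile = Counter(tag for item in history for tag in item["tags"])

# Recommend whatever in the catalog best overlaps the profile.
catalog = {
    "Movie D": ["romantic comedy", "holiday"],
    "Movie E": ["documentary"],
}
scores = {t: sum(profile[tag] for tag in tags) for t, tags in catalog.items()}
print(profile.most_common(2), max(scores, key=scores.get))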
There's other kind of more disturbing profiles that get built based on websites that you go to.
There are certain kinds of ad targeting companies that are looking for who maybe is an obsessive gambler, or who has a medical condition, that they might sell to somebody who finds that very valuable, because they're doing clinical trials.
And you're always trying to find people to participate in those.
Google has a place where you can go and see what their profile of you is, like how old they think you are, what your interests are.
Much of the profiling industry has been built for the purpose of advertising.
But then it gets used in kind of more disturbing ways.
I think the big privacy scandal of the last few years was certainly Cambridge Analytica, I think it's a bit overblown to be honest, but what they were doing was trying to get information out of Facebook in order to create voter profiles to see what persuaded people, what arguments work best on certain people.
And because we now have the ability to target individuals, if you kind of understand their profile and how they think, it could become very insidious over time.
- So in late December, for the Times, you wrote a story about a black man in New Jersey who was wrongfully arrested because of facial recognition.
And he wasn't the first.
Tell us about that one case and the larger issue there.
- So the case actually happened in 2019.
There was an incident at a Hampton Inn in New Jersey.
A man came in to return a rental car to a Hertz that was in the lobby and he stole some candy or some snacks.
And the police were called.
And when they got there, he said, "Oh, I'm sorry, I'll pay for them.
"Here's my driver's license."
It was a Tennessee driver's license.
When the cops ran it, it came back as fraudulent and they decided to arrest him.
They saw some marijuana in his pocket and he fled, and he like drove off kind of crazily, he hit a column on the hotel.
He almost killed an officer.
And so they decided they wanted to track him down.
And they took the Tennessee driver's license, a fake license, it was a real photo of him.
And they decided to run it through facial recognition.
And so they got a couple of investigators with access to facial recognition technology to run the photo.
And it came back as a match for a man named Nijeer Parks, who lived 30 miles away in Paterson, New Jersey.
Parks at the time was in Paterson, New Jersey at a Western Union sending money to his fiance.
But the software told them that he was the guy.
A facial recognition match is only supposed to be like a clue in an investigation and not probable cause.
But oftentimes, I've seen at least three cases where that's not the case and it's given more weight than it should be.
So they arrested him.
He ended up spending over a week in jail because he had some prior convictions and it turned out that he's not the guy.
It was just a bad facial recognition match.
And it's problematic for many reasons that you would be arrested and have to deal with law enforcement for over a year.
His case went through the system for a year.
He almost decided to take a plea, just 'cause he didn't wanna deal with it anymore.
He was afraid of what happened when it went to trial.
And this is something that we have been warned about for years, because facial recognition algorithms don't work as well on Black and Asian faces.
There was a national study in 2019 of over 100 different facial recognition algorithms that are out there from companies, and it found that they don't work as well.
I mean, they're not perfect, but they're being used in a way where we do think that they're perfect.
And every time I talk to law enforcement they tell me, "We'd never arrest somebody "based on a face recognition match alone.
"We'd make sure there was other evidence "connecting them to the crime."
But now I've seen three different cases just this year, all black men who were arrested for crimes they did not commit based on a bad face recognition match.
So, it's very concerning.
- Another story that you wrote last year, called Designed to Deceive, is about computer generated images.
So instead of facial recognition, this is using computers to generate faces.
In my spare time, I research disinformation, and there was a story also last year about an Internet Research Agency-created news outlet called Peace Data.
And the profile photos of its editors were AI generated images of people that didn't exist.
Talk to us about that technology and the threat that I guess it poses to our sense of reality.
- Yeah, so it's two different things.
It's face recognition and then kind of face creation, but they're both powered by similar things, in that artificial intelligence and machine learning are getting so much better 'cause we have more data to put into the system.
And so this face generation project that we did, my colleague did all the technical work, but there's a new kind of, it's called, I don't wanna get too technical, but a generative adversarial network.
And it's just kind of machine learning where it takes a bunch of data about faces.
It studies them and it starts making its own faces.
And there's something else that's trying to detect what's not a real face and they're kind of going back and forth and-- - So it's like an arms race.
- It's an arms race.
And it's gotten so good at making these faces that look real.
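For readers curious about the shape of that back-and-forth, here is a minimal generative adversarial network sketch that learns to mimic a one-dimensional number distribution rather than faces; the architecture and hyperparameters are arbitrary illustrations, not the system behind the Times demonstration.

import torch
import torch.nn as nn

def real_data(n):
    # Stand-in for "real faces": samples from a normal distribution.
    return torch.randn(n, 1) * 0.5 + 2.0

# Generator turns random noise into samples; discriminator scores
# how "real" a sample looks (1 = real, 0 = fake).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: learn to label real samples 1, generated 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples cluster near the real mean of 2.0.
print(G(torch.randn(5, 8)).detach().squeeze())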
And so we did this demonstration of it, and you could kind of play with the faces to see (indistinct) how they're made, but they do look like real people, and, as you said, they've started to pop up online.
There's still little ways that you can detect that the computer got it wrong.
Sometimes like mismatched earrings, like a strange part of the picture that doesn't look right, but the technology will get better and better.
And I think eventually you'll have computers that can dream up more than just a profile picture, but pictures that can make a person seem real, like a whole Facebook profile.
And you combine that with other kinds of impersonation technology that's been developed.
There's fake voices now where you can impersonate somebody if you have enough audio of them speaking.
It could be very destabilizing, in terms of just not knowing what's real online.
I always think about it when I'm shopping and you look to reviews to figure out whether something you're gonna buy is good or not.
It's just so easy now to not just create a fake person, but create a fake review, create hundreds, thousands of fake reviews, because computers have just gotten good at this.
And so you get to this point where you don't know what to believe online.
- Have you taken a look at deep fake videos which is the moving image corollary of what you were just talking about?
Have you looked at that or do you have thoughts on that?
I mean, obviously you're well aware of that, also developing as an endeavor, enterprise, per se, whatever the word is.
- Yeah, I mean, it seems to me that that's still early stage enough that it's detectable, but it will get better and better.
And it is, we just, we do, there are motivations to create these kinds of fake news, fake information, disinformation, misinformation, and we already know this is gonna be abused.
And so it's a question of what do we do to detect this?
And a lot of these social media companies are already working on this.
So there's now a new company that sells fake people for like...
(all laughing) - [Wayne] How much is it?
I just wanna know.
(chuckling) - It was about $3 for a dozen people and more, if you want more.
- Oh that's cheap.
- One of their customers was a big social network that wants to buy them in order to train its algorithms to detect when somebody is fake.
So to detect fake profiles and kick them off.
So it's certainly something that tech companies are thinking about and realizing that they're going to have to protect their users.
But as we've seen over the last couple of years, it's very difficult to get rid of misinformation, disinformation from the internet.
So I don't know how it's gonna go.
- When you joined the New York Times they introduced you as someone who would be writing about the quote unquote looming tech dystopia.
So two questions: is it looming, or are we there?
(all laughing) - I mean, I both love and hate my technology.
I think that we're living in both at the same time, and I do appreciate the fact that with my smartphone, I can drop in anywhere in the world and kind of immediately figure out how to go get a car, where to live, where to eat, which of my friends are there.
At the same time, I hate all the time I spend staring at my phone not knowing what kind of data is kind of siphoning out to third parties.
It's gonna be used in some weird way.
So I do think we're kind of we're there and we're not there.
It can always get worse, but I certainly see a lot of troubling things that happen.
And I try to kind of be the canary in the coal mine.
I do a lot of this first person reporting where I try to put myself a little bit further into the future and report back on how it was, so yeah.
- In fact, last year you did another story where you tried to live without the big tech companies.
How did that go?
- Yeah, actually, a couple of years ago, I did this when I was a reporter at Gizmodo, and I kind of revisited it at the New York Times, but yeah, I was thinking a lot about how do you get away from the technology?
How do you prevent these companies from getting your data?
And so I worked with a technologist where we created this virtual private network in the cloud that kept me from being able to send information to the tech giants' servers or get information from them.
And I went through them one by one, Google, Amazon, Facebook, Microsoft, and Apple, which is very hard 'cause all my devices are Apple, but (Jim chuckling) it was impossible, particularly with Amazon.
And that was because when you think of Amazon, you think of amazon.com, maybe you think of products, but they also run Amazon Web Services, which is the infrastructure for much of the internet.
So much of what I wanted to use just went down when I was blocking Amazon.
Google, too, Google was everywhere on the web, on every single website; they would load so slowly 'cause websites would try to load assets from Google before their own content.
So Google Analytics, Google Ads; Dropbox had a Google program they use to detect if you're human or not.
So because I wasn't using Google, I looked to them like I was a bot.
It was incredibly difficult.
I just couldn't use so much of the technology that I've come to depend on.
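To give a sense of how that kind of blocking can work, here is a minimal sketch of a filter that drops connections whose destination falls in a company's IP ranges; the ranges shown are illustrative stand-ins, not the actual ranges or the tool her technologist built.

import ipaddress

BLOCKED_RANGES = {
    "Amazon": [ipaddress.ip_network("52.0.0.0/8")],      # illustrative only
    "Google": [ipaddress.ip_network("142.250.0.0/15")],  # illustrative only
}

def owner_of(ip):
    # Map a destination address to the blocked company, if any.
    addr = ipaddress.ip_address(ip)
    for company, networks in BLOCKED_RANGES.items():
        if any(addr in net for net in networks):
            return company
    return None

def allow_connection(ip):
    company = owner_of(ip)
    if company:
        print(f"dropping packet to {ip} ({company})")
        return False
    return True

allow_connection("52.31.4.10")     # blocked: falls in the "Amazon" range
allow_connection("93.184.216.34")  # allowed

A side effect the experiment surfaced: because so many sites are hosted on the giants' clouds, blocking by IP range takes down far more than the companies' own front doors.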
And that series was interesting.
A lot of people looked at that series, and there's so much talk about antitrust right now.
So people were saying "Look at this, "she like can't live without these companies.
"She doesn't have other options."
And other people said, "Look, she can't live without these companies.
"We need to like leave them alone.
"They offer such amazing services.
"Life is hell without them."
So it was really interesting to see the two different reactions to the series.
- Kashmir we've got to leave it there.
Your work is remarkable.
Folks should look for you in the New York Times. That is all the time we have this week, but if you wanna know more about Story in the Public Square, find us on Facebook and Twitter.
For Wayne, I'm Jim, asking you to join us again next time for more Story in the Public Square.
(bright upbeat music)
Story in the Public Square is a local public television program presented by Ocean State Media