West Michigan Week
Is Big Tech Too Big?
Season 41 Episode 12 | 26m 46s
The internet has experienced explosive growth. It’s a digital marketplace for ideas, communications and products. It’s a relatively new space where regulation is slow to keep pace. We’ll discuss the societal advantages and challenges, social media mis & disinformation along with the question of free speech. Power the programs you love! Become a WGVU PBS sustaining monthly donor: wgvu.org/donate
West Michigan Week is a local public television program presented by WGVU
(dramatic music) - Is big tech too big?
The internet has experienced explosive growth.
It's a digital marketplace for ideas, communications and products, a relatively new space where regulation is slow to keep pace.
We'll discuss the societal advantages and challenges, including data collection and personal privacy, social media, mis and disinformation, along with the question of free speech on "West Michigan Week."
Thank you for joining us on "West Michigan Week."
2021 marks the 25th anniversary of the Telecommunications Act of 1996.
Since then the internet has become a delivery system for just about everything you can imagine.
The Hauenstein Center for Presidential Studies at Grand Valley State University hosting an October event titled "Does Big Tech Equal Big Trouble?"
Its guests are joining us to discuss the question.
Carl Szabo is vice president and general counsel at NetChoice, a trade association deemed Silicon Valley's most aggressive lobbying presence in Washington, DC, and Josh Hammer is counsel and policy advisor at The Internet Accountability Project, a conservative group aiming to rein in big tech companies.
Gentlemen, thank you both for joining us.
News breaking as we record this program, former president Donald Trump says he's launching a new media company with its own social media platform nine months after being expelled from social media.
Trump says his goal in launching the Trump Media and Technology Group and its Truth social app is to create a rival to big tech companies that have shut him out.
Trump has spoken about launching his own social media site ever since he was barred from Twitter and Facebook.
Not that surprising, but what does it tell us about the current environment?
- You know, Patrick, it's a really good example of just how easy it is to jump into this market, to compete with the existing platforms, the low barriers to entry.
This is something you never saw during the quote unquote, robber baron era with oil and railroads, where it cost billions of dollars to lay track or begin drilling.
No, instead, whether it's Donald Trump, the MyPillow guy, MeWe, Parler, Snapchat or TikTok, with only a couple million dollars you can create your own social media network.
And people will go there if you create good content.
Now, one of the challenges I see for former president Trump is, is he going to allow Hillary Clinton to have an account on there?
Or is he gonna balk at that?
If so, he's then quote unquote, preventing her speech.
Likewise, as any social media platform grows and begins taking on advertisers and looks to get its next round of financing and evolve, people are not willing to back unmoderated content like what's on 8Chan or 4Chan, which is just the filthiest part of the internet.
And that was one of the challenges that Parler ran into head first.
They engaged in no content moderation, and it became a place people didn't want to go.
It also wasn't a very well-built system.
So they reinvented themselves and now they do engage in content moderation and they have a pro-free speech component.
But Donald Trump's new social media network is a great example of how competition is something that we can all create.
And as President Trump encourages his supporters to go there, they will.
And that's one of the amazing things to show just how competitive and vibrant this marketplace is.
- Josh, how do you view the announcement today?
- So great to be with you, Patrick.
So I unsurprisingly see this a little bit differently than Carl sees it.
So we now have this so-called platform called Truth.
There's obviously an alternative platform that has been closely affiliated with a former high-ranking Trump campaign staffer named Jason Miller, a platform called Gettr.
There has also been Parler.
Parler, I'm told, nominally still exists.
I don't think I've ever sent a parlay or parley or whatever it's called from Parler.
I think I have the app.
But we now have Truth, Gettr and Parler as proverbial, quote unquote right of center social media alternatives to the dominant tech platforms, in this case particularly Facebook and Twitter.
And the fact that we are now on basically kind of the third iteration of this.
There's also some website I'm barely familiar with called Tuvu.
I don't really know a whole lot about it here.
But there's all these sorts of things cropping up here.
And the sheer volume of quote unquote alternative platforms popping up, the fact that none of them seem to stick.
I mean, sure, the barrier to entry is low in the very technical sense that it's very easy to start a business, but in terms of acquiring any sort of user base, any kind of market power whatsoever, I think it actually indicates something close to the opposite of what I heard my friend and interlocutor just say, which is that the dominance of the platforms is such that it is revealing that the former president of the United States was even forced to do this in the first place.
And look, I've been very critical of Donald Trump at times, okay?
I proudly voted for him in 2020.
I criticized him profusely after January 6, but the fact that he was nuked from what is the 21st century equivalent of the modern public square, I cannot emphasize that point enough.
When we talk about Twitter, Facebook, we are talking about where people go to communicate with one another, to disseminate, promulgate and ideally, in kind of a traditional, kind of perhaps antiquated notion to engage in some sort of exchange of ideas and kind of a dialectic in the traditional Greco, Roman sense, and actually kind of ultimately arrive at something approximating the truth.
The fact that a president of the United States, a duly elected president, was kicked off and is now resorting to kind of the third or fourth iteration of an alternative social media platform to even have his voice get out there, I think, is ipso facto problematic.
- And this is where we get into that gray area, right?
Free speech, but then regulated speech.
So what is that fine line?
Because we can't all just say whatever we want to or post whatever images and videos we want to.
So what is the regulation here?
What is the fair space where we can have dialogue on the internet and through social platforms?
What is that sweet spot?
- You know, Patrick, I don't think we're ever gonna see that one panacea social media platform.
I mean, Donald Trump, I don't think has ever had a problem getting his message out there.
He could hold a press conference in his backyard at Mar-a-Lago, and I promise you every major media outlet will be there.
So him being kicked off Twitter, I really don't think impeded his ability to get his voice out there.
But we're never gonna have that one centralized social media platform, because there's so many different choices out there.
I like to think of social media platforms as kind of like restaurants.
If I don't want to go to Buca di Beppo's I will go to PF Chang's.
If I don't want to go to PF Chang's, I'll go to Wolfgang Puck's.
If you can tell, I love chain restaurants.
And there's a variety of choice out there.
So you see TikTok, which is the most downloaded app on the planet competing directly with Facebook.
Now Facebook sees competition, they're trying to integrate some of the features of TikTok.
YouTube's trying to do something similar.
If you want short videos and clips, you can use Snapchat, which is one of the most popular apps amongst teens and young college students.
So what you're seeing is an incredible amount of choice out there.
And what people will do is they will naturally gravitate to the services that they want to end up on.
People often think of Facebook as being the alpha and omega of social media platforms.
And I think it is neither.
It certainly can't be the alpha because that was Myspace.
And before Myspace, we had Friendster.
And it can't be the omega, because right now Facebook is losing users at a precipitous rate and not gaining new ones.
That's why you've started to see the board look for new ways to innovate and expand their brand.
So that's the wonderful thing that exists today, is there's a ton of choice out there, and people will go to the systems that they want.
Because at the end of the day, it doesn't matter if I go to Facebook once a month, what they care about is how much time am I spending on Facebook right now?
And because I'm talking to you, that's time I'm not spending on Facebook.
And it's, in essence, a competition for attention.
And so the wonderful thing is your viewers right now are taking time away from Facebook.
They're taking time away from Netflix.
They're taking time away from the myriad of other services that they could use to compete.
And Facebook doesn't get paid unless it has eyeballs active.
So it's a huge competition for time.
And that's the marketplace in which we live, which is a robust one, and it's a competitive one, and it's made for users and it gives us lots of choices.
- I understand the choice argument, but you know, I could also say you can't yell fire in a theater, right?
I mean, so again, I'm trying to get back to how do you regulate without taking away certain freedoms that we are all guaranteed?
Like what are the boundaries that these places should operate?
What's fair?
- And we actually know what that is.
It's called the community standards that we all agree to when we sign up for these services.
And different services are gonna have different community standards.
There's a subreddit, so Reddit's one of the more popular social media platforms, that is all about pictures of dogs standing on their hind legs.
And when you agree to post an image on that subreddit, it will only be dogs on their hind legs.
And if you post a picture of dogs on all fours or, God forbid, a cat, your post will be removed.
So what we end up seeing are different attempts at content moderation.
And there are a lot of things that most of us would agree on.
We would say no terrorist content.
But who's gonna make that decision?
Do we want the government to be making that decision?
Probably not, because terrorism is in the eyes of the beholder.
George Washington was a terrorist at one point to the British government.
Do we want the private business to make that decision?
We can.
And because there's so much choice out there, users can vote with their feet.
If there's too much offensive content, we'll just stop using the platform.
We don't have to unsubscribe.
We don't have to delete our account.
We just won't go there.
So is there one rule for everything?
Probably not.
But what we are given is the ability for each platform to decide what's best for their users and their customers.
Just like every restaurant decides what's best to put on their own menus.
- But Josh, I'm guessing that you're probably thinking that, "Well, wait a minute."
And this gets back to the question, which is, is big tech too big?
And that is that the gatekeeper still has the control, right?
So sometimes can it overstep?
Is that some of the concern that you have the power of a few can dictate the bigger picture?
- So when we're talking content moderation decisions here, we're mostly talking about section 230, you know, which of course, it's 47 US code section 230.
It was part of the Communications Decency Act that was a Clinton era statute that was passed in 1996 here.
And specifically what we're usually talking about when we're talking about content moderation for the big tech companies, and the varying levels of legal immunity the companies should or should not enjoy, is the so-called Good Samaritan provision of section 230, subsection C2.
So I think just, it will be clarifying just for the viewers just to hear the language here.
So I'm actually just gonna read it if that's okay.
Subsection C2 of section 230 reads, quote, "No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith."
That's the keywords there.
"In good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing," and here's the other key language, "or otherwise objectionable, whether or not such material is constitutionally protected."
So the good faith and otherwise objectionable language are kind of the key provisions there.
Now the courts over the years have interpreted that to provide huge swaths of leeway to tech companies to effectively take down content as they so choose.
The courts have traditionally not engaged in a subjective analysis, similar to a Fourth Amendment reasonableness inquiry, as to whether or not the action was actually taken in good faith.
They have not kind of tried to discern what quote unquote, "otherwise objectionable" really means.
We're starting to see a little bit more of that actually, but traditionally the courts have generally shied away from that.
Justice Clarence Thomas, the most conservative member of the Supreme Court, has now weighed in multiple times that this status quo of subsection C2 interpretation is problematic.
He most recently did so in an April 2021 decision called Biden versus Knight First Amendment Institute that gained a lot of traction in conservative circles.
And I guess my proposal or my series of proposals would entail multiple things.
One of which is I agree with Justice Thomas.
I think that this provision has been erroneously interpreted here.
In fact, there was a recent law review article from Eugene Volokh of UCLA School of Law and Adam Candeub from Michigan State University, right here in Michigan, that I think makes a very, very persuasive case.
And I don't want to get too much into the legal weeds here, but they basically say that if you look at the enumerated criteria that are there in subsection C2, lewd, lascivious, filthy, ultimately this was really about providing leeway to take down sexually pornographic images.
A lot of this was actually motivated by trying to clean up the internet in kind of like a pornographic sex trafficking sense here.
So what these law professors argue is that the otherwise objectionable language there should be narrowly construed to not just give blanket immunity and leeway, but should actually more narrowly be applied to be similar to the enumerated criteria, the lewd, lascivious, et cetera.
What that means in less lawyerly language, is that the tech companies actually, under a proper construction of the statute would have less leeway to actually censor without legal immunity, the way that they'd been doing.
But on top of that, because I don't want to rely solely on the courts for this, I do support legislation that would actually modify the so-called good Samaritan provision as well, which would kind of replace this vague and subjective, otherwise objectionable language with some kind of concrete terms.
Senator Marco Rubio had a bill from earlier this year, I believe it was in June, where he suggested removing "otherwise objectionable" and replacing it with terms such as promoting terrorism or simply unlawful.
Therefore kind of basically saying that if the speech is lawful, you know, the fire in a theater thing would be a good example of what is not lawful speech, but if the speech is lawful you basically have to platform it.
So those are kind of some of the reforms that I would suggest there.
It's also worth just briefly noting here that part of section 230 is actually arguably unconstitutional on its face.
Because the final part of that provision talks about quote, "whether or not such material is constitutionally protected."
What the statute's actually doing here is delegating to private actors the leeway to purportedly censor that which the government cannot censor itself.
There's a long line of cases, including a case called Norwood vs Harrison from the 1970s, where the Supreme Court has said over and over again that the government cannot do that.
You cannot immunize private actors to do that which the government itself cannot do.
So it's constitutionally problematic.
It has been judicially misinterpreted, and it should be statutorily reformed on top of that, I would say.
- I'm gonna consult the honorable George Carlin.
(laughs) I mean, there's a lot here and there's always the language issue and everything else that's taking place.
I want to stick with this line of thinking here, and we get into the free speech, but where are we when it comes to our data privacy?
That seems to be a big issue, but also misinformation and disinformation.
Maybe that's the better segue here.
And that is when we get to free speech, but then there's misinformation and disinformation.
How do you get your arms around that?
- So one of the things that the list that we just heard doesn't address are a lot of the things you just mentioned, Patrick.
It doesn't mention things like child grooming, where adults try to cultivate sexual relationships with children.
Until they act, that's protected by the first amendment.
It's not on that list.
Terrorist speech, terrorist recruitment, not on that list, and it is protected by the first amendment.
There's a lot of lawful, but awful content that exists on the internet that we probably don't want to see.
And that's what we're ultimately getting to.
The thing that you bring up, Patrick, is that when it comes to misinformation or disinformation, the challenges that social media platforms are having is we're stuck like the child having to pick between two parents.
We have Democrats saying, you must remove all the content that we don't like, or that we consider to be harassing or bullying.
Bullying also not on the list that Josh just laid out.
And then you have conservatives saying, "You need to allow more content to live up there."
And stuck in the middle are social media platforms, where we have politicians trying to force us to pick which parent we want to go with.
And that's really bad, because that's government intrusion upon the decisions of private businesses.
We want to allow these platforms to decide what is best for themselves, what is best for their users and their customers.
There is a thing that came up called the Tide Pod Challenge.
Now, this is famously where a bunch of kids were eating Tide Pods, which are detergent, and getting really sick.
Well, it's constitutionally protected speech to say, "Ooh, yum, Tide Pods."
It is neither lewd nor lascivious.
Shouldn't platforms be allowed to remove that if they want to get rid of it?
And how do we write a law to address it?
That's where I end up landing: what is the solution?
Is the solution for the government to try to prescribe laws dictating what speech can and cannot be allowed on social media platforms?
Which for me, as a conservative, violates the first amendment notions that are intrinsic to what I believe?
Or should we allow the platforms to make just in time decisions?
So one of the things that came up recently and always gets complained about is Parler coming down after January 6th.
January 6th was an instrumental day in our nation's history and platforms felt like they had to react.
They don't want to be held responsible.
They don't want their users or their supporters or their advertisers in particular abandoning them because they're seen to be in agreement or contributing to these efforts.
So platforms have a really tough job to figure out how to appease all these multiple interests, whether it's politicians, conservatives, liberals, terrorists, who want to post all the time, whether it is people posting on political issues or people posting on cat videos.
And at the same time appeasing all of their advertisers.
And there's no way that we can expect Congress certainly to write a law to address that.
So the best tool is to give it to the individual.
Give it to the businesses, give it to the content moderators, which are sometimes people at the top level or in a Reddit or NextDoor type situation, they're people in your own neighborhoods making these content moderation decisions.
And that's the best way to keep this ball moving forward and keep the internet civil and the way that we've come to expect it.
- It sounds like we talk about just in time decisions.
Maybe we don't see those in the public, because it seems like there are a lot of after the fact decisions.
It seems like there's so much content to police in the first place.
I don't know how any of these content platforms can really wrap their arms around everything.
- I mean, that was part of the genesis and the impetus behind section 230.
So YouTube, let's let's pick on YouTube for a minute.
YouTube has a hundred hours of video uploaded every minute.
A hundred hours uploaded every minute.
So that's 6,000 minutes every minute.
You would either need an exponentially growing team of content moderators, starting at 6,000 and adding 6,000 every minute, to review that much content, or a system of automation, or a combination of the two.
What we don't see are the content that gets removed and never even gets near our inboxes.
So we did a transparency report a couple of years ago.
In just six months online platforms took down more than 5 billion posts and accounts for things like spam.
Spam is a good example.
One person's spam is another person's advertisement.
They took down things for child sexual abuse material, which we all agree is prohibited and is not protected by section 230 anyway.
They engage in a ton of content moderation to try and keep on top of that.
And sometimes when they do their content moderation, we assume it's about us.
So another example is Amazon was famous for removing a Clarence Thomas short video from Amazon Prime.
And I contacted my friends over at Amazon and said, what the heck?
Why did you remove this video?
Well, it turns out it had nothing to do with Clarence Thomas, had nothing to do with conservativism.
It had everything to do with the fact that nobody was watching Amazon Prime short-form videos, and they just obliterated the whole category.
So they took down about 10,000 videos simultaneously, one of which happened to be about Clarence Thomas, and some people clutched their pearls and claimed to be the victims of a conservative conspiracy when it was just that nobody was watching the videos anyway.
So that's a lot of the discussion that's going on is sometimes people think that they're being attacked when in reality, a cigar is just a cigar.
- We have about four minutes here and Josh, I'm gonna have you jump in.
Data privacy, monetizing data.
How has this been shaping up over the years?
I know that that's something that everybody has concerns over.
What is the approach that you're seeing?
- So I want to spend some time responding to some of what Carl had to say, because I think - Four minutes.
- that it's important.
- We've got four minutes here.
- Okay.
All right.
Sounds good, Patrick.
I appreciate the heads up.
- That's all right.
- So this notion that we want the social media platforms, Facebook, Twitter and so forth, to be engaging in more quote unquote good faith content moderation, because any way to restrict section 230 would therefore reduce their ability to do that.
They're not doing that currently.
I mean, under like the existing lay of the land, as far as like courts all across the country have interpreted subsection C2, they have large swaths of leeway to take down that stuff.
And as anyone who has ever spent any time whatsoever on Twitter or Facebook, will tell you they're not doing so.
I was literally there in Orlando, Florida, or in Tampa, Florida, excuse me, in July.
I was about to do a panel with Charlie Kirk and my good friend, Sohrab Ahmari when Charlie Kirk banned Brandy Love, you know, the quote unquote conservative porn star from being at that Turning Point USA conference.
It turned out that Brandy Love's Twitter account, her pinned tweet was her engaging in sexually explicit pornographic material.
It was literally at the top of her Twitter feed.
So Twitter is not exactly upholding its end of the bargain to kind of take just one example there.
I mean obviously, The Ayatollah of Iran, Hamas, they are all over there.
So, you know, I was reading kind of the proposed language from Senator Rubio.
To be clear, I don't work for Rubio.
I have nothing to do with Marco Rubio, it was just like one example of the statute.
And there he's talking about replacing it with concrete terms, such as, quote, promoting terrorism, or content that is, quote, unlawful.
Well, the remedy there, then, is to make more speech, like the grooming of child brides or sexual trafficking, unlawful.
The irony of course, is that I'm actually not a free speech absolutist at all.
Anyone who has read my ancillary work on the First Amendment or constitutional theory could tell you that I actually take a more restricted view of what kind of speech should be legal.
I believe in a robust, common law of defamation and things like that.
But the way to do that, then, is to use the positive law to make more speech unlawful.
So all that to say your question Patrick is about data privacy, which to be fully candid... - We have one minute.
- Okay.
- Quickly.
- Well look, I'll use the remaining 55 seconds or whatever we're on here to talk about kind of the ancillary topic to data privacy, which is just the concrete harms that we have seen with these recent Wall Street Journal reports about Facebook and Instagram, and the way that the company seems to be burying internal evidence that its products are addictive to teenage girls and all of this.
And look, these companies, I hear Carl like over and over again, say that it's kind of like a conservative principle that is handed down from Mount Sinai to kind of let these corporate entities do whatever the heck they want to do.
I mean, I don't know what kind of tablet from Sinai he's reading.
That is not my understanding of conservativism.
When these companies are actively ignoring that kind of internal evidence, that their products are cigarette like addiction tools for teenage girls, it's time for the government to get involved here.
There is no excuse for the government to not play any role whatsoever.
There's nothing particularly conservative about letting entities destructive to the common good of the country just run their course and do whatever they want.
- All right, we're gonna have to wrap it up there.
Maybe next time you come to town, we'll talk about data privacy.
Josh Hammer, thank you so much.
Carl Szabo, we appreciate all the input and maybe it'll open some eyes for our viewers.
Thank you both for being here.
- Thank you.
- Thank you.
- And thank you for joining us.
We'll see you again soon.
- (dramatic music)


