The Wheelhouse
Big Tech’s big influence on the Trump Administration, plus AI regulation
Episode 17 | 52m 4s | Video has Closed Captions
We examine the intersection of big tech and politics. Lawmakers in Connecticut and beyond are attempting to regulate the harmful impacts of artificial intelligence without stymieing innovation. Meanwhile, billionaire CEOs were front and center at the inauguration of President Donald Trump. Will this relationship impact the government’s ability to keep big tech’s power in check?
The Wheelhouse is a local public television program presented by CPTV
This week on the Wheelhouse: big tech and politics, from AI to tech billionaires.
For Connecticut Public.
I'm Frankie Graziano.
This is the Wheelhouse.
The show that connects politics to the people.
We have a weekly dose of politics in Connecticut and beyond, right here.
Tech bros, technocrats, and of course, broligarchs.
These are some of the words used to describe the male CEOs behind billion dollar companies like Meta and Tesla.
In the past, some of these companies have tried to distance themselves from President Donald Trump.
Not so fast.
Not anymore.
They're throwing their support behind him.
Last week, people like Mark Zuckerberg and Elon Musk had front row seats at Trump's inauguration.
So why are they pivoting to politics, and what might that mean for you and me?
We'll get some analysis on those questions and more.
Later, we'll check out how local lawmakers are working to regulate AI.
Our first guest, when examining the recent events on Capitol Hill marrying tech billionaires to politicians, used the term broligarchs in her story.
She is Sigal Samuel, and she's a reporter at Vox.
Good morning.
Morning.
Nice to be with you.
Thank you so much for coming on the show.
I first saw the word broligarch in that story at Vox.com.
It's called "The broligarchs have a vision for the new Trump term. It's darker than you think."
Why is this phrase, why is Broligarch something that is entering our lexicon?
And who are these people?
Yeah, so it's the intersection of bro and oligarch.
You've heard different people warning that we're seeing an oligarchy in this country.
And the people sort of helming this are the tech bros.
So you've got people like Elon Musk, Mark Zuckerberg, Jeff Bezos, but also, perhaps less household names, Peter Thiel, Marc Andreessen, and the bro part, I will say is not incidental here.
Arguably.
And we might get into this in a little bit.
There is a sort of ideology underlying this of white supremacy.
So this sort of aggression, which is coded as male, is part of the point here.
Yep.
We're not calling it a system market or anything like that.
There's totally white male energy in this part of the conversation here.
Why do you think these men were so close to President Trump during his inauguration?
We talked about it earlier in the show: front and center on such an important day for the president.
That's right.
I think there's a dominant narrative in the media that says, look, it's very simple.
It's about protecting their own selfish business interests.
You know, Trump has promised massive tax cuts for billionaires.
So these broligarchs just want tax cuts.
They want friendlier regulations or fewer regulations.
I think it's, more complex than that.
I think that the MAGA ideology gels really, really well with these tech bros' ideology, which you could call an ideology of impunity.
It's sort of a, an approach that says we, the rich and powerful, don't just want more money.
We want to have absolutely unchecked power for the powerful.
We want no constraints on us.
Not from government, not from wokeness, not from the tyranny of facts.
Not from anything.
In the final months of President Trump's campaign, I think it was Elon Musk who contributed more than $100 million to the effort, according to federal disclosure documents.
How else have CEOs of these companies been throwing their support behind Trump?
They've donated millions, to the Trump inauguration committee or inauguration fund.
And they've also just, you know, been sort of vocally in support of him on social media and with their physical presence showing up on the dais.
Right.
Showing up, for the inauguration itself.
And you see, adjacent figures like Sam Altman, the CEO of OpenAI, making these big, big deals with Trump, for AI data centers to be built in the US.
And you even see some of them, including Altman, adopting Trumpian phrases, on their social media posts.
I'm thinking, for example, of Sam Altman, who wrote that these data centers for AI that they're going to build are big, beautiful buildings.
I see what you did there with the words "big" and "beautiful."
Are there any key issues where these so-called broligarchs disagree with the Trump administration, or maybe each other?
So maybe some distance?
Or is everybody pretty tight at this point?
Yeah.
So I would say the Broligarchs are not a monolith, although they have a uniting ideology underlying a lot of what they do.
There's you know, there's been some tension between them at different points.
You might remember that Musk and Zuckerberg at one point threatened to fight each other in a cage match.
We've had all sorts of, like, weird disagreements.
There's also, sort of a schism in the pro-Trump camp, around immigration because the tech bros, need immigrants.
They want immigration for their tech companies.
It's very important to have employees coming through immigration channels.
Whereas the more sort of nativist, white supremacist camp is very firmly anti-immigration.
So you already see some sort of like, schisms that exist.
And we'll see in the coming months whether they cause the camp to implode or how those alliances might shift.
There's been a bipartisan effort within the US government to make social spaces online safer, primarily for children.
Mark Zuckerberg was questioned by lawmakers in Congress, including one year ago when he was urged to apologize to families with children who've experienced cyberbullying.
How might Zuckerberg's evolving relationship with the Trump administration impact efforts to keep people safe online?
Yeah.
Zuckerberg has been undergoing quite the transformation in the past few years.
And you see the bro part of the broligarchy, by the way, manifesting very neatly in him.
You even see it physically in his physical makeover.
From the sort of more nerdy tech bro to this very, what he would call, masculine energy, where he's really gotten into mixed martial arts and hunting wild boars, leaning into the aggression and saying that American corporate culture now has too much, quote unquote, feminine energy.
You see this relating to MAGA in that, Zuckerberg now seems quite content to be able to shrug off the yoke of being bound to facts or fact checking.
Right?
He has rid Meta of the obligation to do the kind of fact checking that it had previously done.
And that should be a big concern not only to, you know, definitely like teenagers and youth in America who experience cyberbullying, but also on a much bigger global stage.
Then-Facebook, now Meta, was implicated in a textbook example of ethnic cleansing in Myanmar.
It was very clearly determined that Facebook had played a role in inciting hatred against the Rohingya in Myanmar.
And at the time, Facebook said, we really care about this.
We're going to be very scrupulous about hate speech and fact checking on the platform.
But now that Trump has come in and there is this MAGA ideology of we don't need to be bound to the facts, right?
The famous phrase: we can have "alternative facts."
I think is emboldening Zuckerberg to sort of throw off that yoke and say it's kind of an anything goes platform.
Zuckerberg, he's been accused, as we mentioned, of not necessarily caring about those children that folks on Capitol Hill are saying they're trying to protect.
But why might he, why might some of these other broligarchs, so to speak, want to be in such good standing with President Donald Trump?
Why might they want to make this move?
So this is going to sound a little odd, but bear with me.
What these oligarchs have in common is that they are deeply influenced by an ideology that is rooted in science fiction and fantasy.
All these folks there, if you ask them what are their influences, they will cite things like Isaac Asimov, major sci fi writer, Star Trek.
Things like that.
And it connects to this worldview where they see themselves basically as the heroes in their own sci-fi saga, as Superman, or as Marc Andreessen calls it, technological Superman, where they feel that they should not be bound by common-sense morality or the laws that apply to all the rest of us, because they are Superman; they are determining their own values.
They're not sheep, they're free thinking.
And so they should be unfettered by the law.
MAGA and Trump are a perfect ally for them in that sense, because think about it.
He's a perfect avatar for that worldview.
He is a guy who, you know, sort of, famously, notoriously said in reference to sexual assault when you were a star, they let you do it.
You can do anything, right?
So they want to ally with someone who will be sympathetic to this ideology that when you're a quote unquote Superman, you should not be bound by the rule of law.
You should not be bound by democratic regulations.
It's a convenient alliance for them, at least for now.
And what makes it sound, I guess, less tin-hatty, if that's a concern that you always have when you're trying to describe these men and their aims, is that there's this obsession, and you outline it in your story, with space, pretty much for some of these bigger ones that we're talking about in terms of the names.
Right.
A Bezos, a Musk, a Zuckerberg, even.
There's that obsession.
Correct?
Absolutely.
You see this very clearly with Jeff Bezos and his commercial spaceflight venture, Blue Origin.
But you also see it very clearly with Elon Musk, who famously wants to colonize Mars to, quote unquote, save humanity.
He's inspired by the sci fi writer Isaac Asimov.
His whole goal is to sort of save civilization from, you know, what could be a dark age if, you know, our planet Earth is dying.
And so this is sort of like one of those things where you see, they're seeing themselves as the heroes in their own sci fi story, whether it's Bezos or Musk.
Or the others.
You also see it in their obsession with longevity.
They want to cheat death.
This goes along with what I was saying before about not being bound by any constraints or rules, including the constraints of nature.
Right?
Including the constraints of death.
These folks, you know, Andreessen, Thiel, Bezos, have poured money into these tech startups that are doing longevity research to combat aging and death, in the hopes that they can one day live forever and not even be subservient to death.
I know you've had to spell it out for me several times, because there has been this situation that we have where we're talking about these tech supermen.
And I know my questions kind of sound similar, but mostly because I'm still shocked at your statement earlier when you had to say that the broligarchs are not a monolith.
Put that on your 2025 bingo card.
But, in all seriousness, this is where I, I guess, drop my smile for a second to say that the second part of the title of your piece reads:
It's darker than you think.
Say more about that, please.
So when we just hear the part about how they want to go to space and they want to cheat death, maybe you're thinking, okay, that's weird, but, like, maybe it could be harmless to a degree.
What can definitely be very obviously harmful is that these broligarchs very much want to not be restricted by law and by government.
And so some of them are actually working on startup cities or sometimes called network states.
These are basically like independent mini nations that they're trying to create carved out of the surrounding territory.
Some folks might have heard of Prospera, the startup city that's been built just off Honduras.
And there's others, in the Mediterranean and elsewhere that they're trying to build.
These would be cities or sort of mini nations where traditional democratic rules and governance would not apply.
These oligarchs would get to live only under their own rules, and so would their acolytes who are part of these mini nations.
And that is something that they're trying to use crypto to build up.
Crypto is their financial, tool of choice because it's inherently anti institutionalist.
It's a way of saying we don't need to use banks or government to transact.
We can transact in a free manner.
So this is like really an effort.
That sounds pretty fringe, but right now there are buildings being built on these islands, right?
They're actually gathering millions of dollars to build up these startup cities or network states.
And that is all part of an effort to undermine the idea of a democratic nation state in the world and undermine democratic rule of law.
It's a one-two punch of a lot of things happening all at once, particularly within this sphere.
So I guess I'm just trying to recover and kind of think of a way to move forward here, or at least something to look at in the short term.
So, there's a lot of consequences there.
But in the, in the short term for democracy in America, what is something that we should look out for?
I would say, don't be distracted by the new bottles that this old wine is being poured into.
We have old ideas, that are being kind of repackaged and sold back to us in new forms.
In my piece, I sort of analyze how a lot of this goes back to the old ideas of transhumanism.
And even before that, to the philosophy of the German philosopher Friedrich Nietzsche.
The ubermensch, meaning the superman.
And transhumanism is a sort of Silicon Valley modern, update of that which says we can engineer our way past any rules, even the rules of Mother Nature.
So thus we don't need to die.
We can live forever.
We don't need to obey the, you know, common sense morality.
Because we're technological supermen, we can engineer our way past anything and be free.
That old idea of the ubermensch is old wine being poured into these new bottles, presenting itself as this very techno-scientific, ultra-modern thing that's saying, like, look, we're scientific people.
We're going to use science and technology to go into space.
It all sounds very modern, but it's really white supremacy.
It's really regressive, and it has roots in these, quite dangerous ideas.
The guy who coined the term transhumanism, Julian Huxley, popularized it in the mid 1900s.
He was not only the guy who coined transhumanism, he was the president of the British Eugenics Society.
And it's no accident; these ideas all have at their core this notion that we can engineer out of humanity whatever we consider, quote unquote, undesirable, and we can become this more glorious species that has this everlasting future in the stars.
Yeah.
And it just has wider implications for folks, not just of different colors and perspectives, but also people that are disabled as well.
So this is, quite terrifying as we end here, but nonetheless, very important to have this conversation.
Sorry to end on that note, Sigal, but this is very important work.
Where can people find the Future Perfect podcast?
So, you can find the podcast in any of your favorite podcast apps.
And you can find Future Perfect online on Vox.com.
It's the Future Perfect section.
That's what I write for.
And this piece in particular about the ideology of the broligarchs was "The broligarchs have a vision for the new Trump term.
It's darker than you think."
I told you I couldn't promise you levity.
Yes.
We spoke on that before the show.
Maybe we were looking at some levity, but at least some of the subject matter does make you lol.
When you look at some of these terms, like broligarchs, I've heard "nerd herd," technocrats, stuff like that.
So, no levity I guess, but nonetheless very important perspective.
So thank you so much for this work, Sigal.
Thank you so much for coming on the Wheelhouse.
My pleasure.
And hit us up with your tech questions, not on how to fix your remote or like your internet, but the ones that have to do with big tech and politics.
You're listening to The Wheelhouse on Connecticut Public.
Thanks for listening to Connecticut Public Radio.
Support comes from the Connecticut Democracy Center, which will honor business, community, and philanthropic leaders for their civic engagement at an awards celebration at Connecticut's Old State House on March 13th.
Tickets at ctdemocracycenter.org.
The first Senate confirmation hearing to decide whether RFK Jr will lead HHS is about to begin.
Kennedy has criticized a number of public health norms, including putting fluoride in water, a nearly 80 year practice in the US.
But what does the science say about the risks and benefits of fluoridating our water?
That's next time on Where We Live, coming up this morning at 11.
Support for arts and culture reporting on Connecticut Public Radio comes from Hartford Stage and the Hartford Symphony Orchestra.
Listen for reports on Morning Edition and all Things Considered.
We know you're still recovering from the holidays.
And is anyone really ready to talk about roses and Valentines?
Hey, it's Gavin Ship.
Host of Where We Live.
And I get it, we're doing something different this winter and we need your support.
So hear me out.
No traditional February pledge drive week.
Same classic roses delivered by Friday the 14th, and more news and conversations that you love.
So visit ctpublic.org/donate and show Connecticut Public some love.
This is the Wheelhouse from Connecticut Public Radio.
I'm Frankie Graziano.
AI has officially entered the chat in Connecticut's 2025 legislative session.
Legislators are grappling with how to identify and regulate potential harms of artificial intelligence technology, while also making the state an AI friendly place to work and innovate.
To help me understand how this is playing out at the state Capitol and how Connecticut fits into the national context, I'm joined by Brian Zahn, reporter with the New Haven Register.
Brian, thanks for coming on the Wheelhouse.
Thanks for having me, Frankie.
Thank you so much for coming on.
Also with us, Meredith Broussard, data journalism professor at New York University.
Meredith, great to have you here today.
Thanks for having me.
And great to see you this morning as well.
Kathryn Hulick, freelance science journalist and author of Welcome to the Future: Robot Friends, Fusion Energy, Pet Dinosaurs, and More.
Kathryn, thank you for joining us today.
Thank you for having me.
Thank you for coming on. Folks who want to join the conversation, give us a call: (888) 720-9677.
Maybe we can answer your questions.
(888)720-9677.
Happy to take them this morning.
Brian, before we dive into the perspectives of several lawmakers who are involved in this, including Senator James Maroney and Senator Tony Hwang, two state legislators who have been vocal about regulatory policies around AI.
Can you add some context on the ways AI regulation might evolve in this year's legislative session?
Well, it's an interesting question, frankly, because, really, I think the story here is that, this is really not their first try at it.
This was brought up in the last session.
And the governor said that he wanted to see more of a national approach.
So really, the approach that they're taking is a more national, 50-state look at the issue.
So, it'll be interesting to see if they take a more comprehensive approach to addressing, like the harms but also innovation in that sector.
I had the opportunity to talk to State Senator Tony Hwang, a Republican, about his concerns.
And he kind of outlined a little bit what Brian Zahn is talking about.
Let's hear it.
We have technology innovation that not only looks at Connecticut, but has the whole country, the United States, if not internationally, to compete in innovation.
And are we hampering our businesses or technological innovation advantage and hampering their ability to compete on a national as well as international marketplace in regards to data innovation?
So I think that was where the governor, in his wisdom of taking a step back and saying, let's not race to be the first, but let's race to be the best in understanding artificial intelligence as we try to govern it, but also encourage, economic, entrepreneurial as well as consumer, safety and protection.
So there's Senator Hwang explaining that he's in favor of regulating harms, but wants the state to be conscious of how business may be impacted.
I also talked to State Senator James Maroney, a Democrat who's spearheading regulation legislation.
I asked him how this year's bill differs from last year's.
Here's what he said.
I think what's different this year is that, we've had more time to educate people about it.
This isn't new.
We're looking at harms, not the technology.
You know, people will say, well, discrimination is already illegal, but because of the black-box nature of some algorithms, unless you create a requirement that there's explainability, you're not going to know why that decision was made, and you can't fix it.
And that's the good thing about discrimination in algorithms.
You can test for it and you can fix it.
And so there is an example: last summer, Lehigh did a study where they used ChatGPT to make mortgage decisions.
And if you were white and you had a 640 credit score, you had a 95% chance of being approved for a mortgage.
If you were black and had a 640 credit score, you had an 80% chance of being approved for a mortgage.
Yet when they went back and they instructed it to avoid discrimination in its decisions, it made equal decisions.
And so that's the importance and why we're saying, you know, we're not saying not to use it.
We want people to use it.
But we want it to be safe and responsible and to avoid these disparate impacts.
Senator Maroney is also focusing on the business and innovation portion of it.
But he's thinking that people feeling safe will help get to that ultimate aim.
So, Brian, a lot there.
Unpack it, please, and, help us dig into the nuance that you heard from those gentlemen for sure.
So I think what you're seeing is when you're a state lawmaker, of course, one of your priorities might be to incentivize business to try to prioritize, not being the one state that has a closed door that says, don't come here and do business.
There's a lot of sectors that are, interested in using artificial intelligence for, whatever reasons, whether it be, innovation, efficiency.
So I think the governor in particular signaled his intent to veto a bill last session that he thought, you know, would make the state the tip of the spear.
So this time they're taking a look at it in a way that doesn't just regulate the harms.
What Senator Maroney told me is that when you train models, it's garbage in, garbage out.
So it's not just looking at that and trying to undo the bias in systems, but it's also to innovate and it's also to grow and train the workforce.
You can see it happening in the educational sector: Yale University has proposed $150 million over five years.
Also, Charter Oak State College has launched a program to train on artificial intelligence.
So that's what they mean when they say they're taking a more comprehensive approach that I think pulls in business education and also protects residents.
Meredith Broussard, data journalism professor at New York University.
We've thrown a lot at our audience so far and at you as well.
I just want to kind of get a sense from everything that you've heard.
How is this local fight to regulate artificial intelligence, playing out in other states or on the national stage?
So this is the same fight that is happening in every state.
It's the same fight that's happening at the federal level.
In the previous administration, we had this really terrific piece of policy.
We had the Blueprint for an AI Bill of Rights.
We had the executive order on AI and that was a huge step in the right direction.
In terms of using scientific evidence, real-world evidence, in order to craft policy that was going to make us all safer in an AI-proliferated world.
That executive order has just been rolled back.
And, I think we are in real danger of experiencing harms at the hands of AI systems.
One of the great ways of thinking about AI systems, you know, at the same time that we're talking about all this wonderful innovation is theoretically possible with AI systems, is we also need to think about the idea that AI systems discriminate by default, right?
We tend to talk about AI as being you know, salvation.
We tend to talk about technology as being so exciting and new and correct.
And, you know, we actually need to think about the discrimination in the world.
And the way that that's implemented inside AI systems.
Very important that you brought that out.
I'm gonna have Kathryn help us ground this conversation as well.
How are most people actually using artificial intelligence technology on a daily basis? Kind of give us some examples.
Yes.
So the way most people encounter AI is through their devices.
So you do a Google search now, and at the top, most of the time, there's going to be an AI summary.
And many people are also using AI as a kind of a tutor to ask questions to help with their emails or their writing.
Even to help with coding, and marketing and social media.
People use it to generate images and video.
And, you know, a lot of the people I meet today aren't using it all the time.
But like I said, you could be using it without even really knowing it.
If you're just searching something and looking at the top result, that could be from AI, and that causes problems when what it's saying is biased or a lot of the time completely wrong.
A hallucination.
And, you know, that is just how the technology works.
That's a very hard thing to fix because it's kind of, you know, it's fundamental to how these things function.
I want to get a little deeper into how this could impact folks on a daily basis.
Meredith, you could jump in here as well.
I want to get to the idea of techno chauvinism in a second.
But before we do that, can we understand other ways this may harm people?
Certain people surfing the web, and things that might pop up in a certain way because of the AI that we use.
Can you all provide some examples?
Well, one really, really concrete example is mushroom identification.
Right.
So we like to think about AI as being very useful for object recognition, facial recognition, plant identification.
I really love my plant identification app.
On the other hand, I know that it's not really very good at mushrooms.
We have a lot of cases where an AI-powered app will identify a mushroom as edible when actually it's poisonous.
And it'll kill you if you eat it.
Are you an intrepid forager, Meredith?
Oh, no, I just, I'm a gardener.
Yeah, but my parents are.
So, when you say mushrooms like that, I think about masquerading.
So you brought me back to a good place there.
But, yes.
Very important stuff.
Potentially for people that are using this, technology.
And now talk to me about techno chauvinism, and help me define that term.
Oh, actually, you could jump in first if you'd like to.
Kathryn.
Go ahead.
Yeah.
So I had another example from a couple of years ago, when it was trendy to use AI to generate an avatar of yourself.
So you pop in your photo and you get back some, you know, fantasy figure, like, you know, an astronaut, and you can use this on your social media.
And there was an MIT Technology Review reporter who's a woman with Asian heritage, and she's like, oh, this sounds like fun.
She puts her photo in and gets back nude pornographic images where her male colleagues got to be astronauts or warriors.
She got back this like sexualized content and that revealed that the data that these, you know, image generators are trained on contain a lot of exploitative and harmful images.
And they came out, when she used it because she was misrepresented in the data.
Or people who look like her were misrepresented in this data, and no one knew about it.
Just because of the huge scale of images that are being used to train these things, images and text, it's so much; it's billions of examples, and no human could go through all that to try to take out the harmful stuff.
So that's where a lot of the harms come from.
It's just such an insanely, you know, huge pile of data that no one is looking at it to see what bad stuff is in there, and the bad stuff is popping out on the other end when someone just innocently decides to make a, you know, avatar of themselves for fun.
I'm so glad that you did jump in there, Kathryn, because that transitions very nicely into what we want to do next.
Talking about techno chauvinism, and also perhaps connecting it to what we heard from Sigal earlier, with many of the people who might be working on this AI behind the scenes not necessarily being representative.
Can you help me understand that?
Meredith?
Sure.
Techno chauvinism is a term that I coined a few years ago, and I talk about it in my most recent book, which is called More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.
Techno chauvinism is the idea that technological solutions are superior to others.
And what I argue instead is that we should use the right tool for the task.
And sometimes the right tool for the task is a computer.
Sometimes the right tool for the task is something simple, like a book in the hands of a child sitting on a parent's lap. One is not inherently superior to the other.
But when you dig further into the techno chauvinism and you look at the people who are proponents of it, you discover that it is a core idea among the broligarchs, and what techno chauvinists are really saying is that math, and the people who do math, are superior to other people who are not mathematicians.
Because really, what is a computer, right?
A computer is a machine that does math.
Computer scientists are implementing mathematics, which is this amazing magic trick, right?
That we've managed to turn math into something that can make these incredibly complex decisions?
But those decisions are not always consistent with our human social values.
So we really need to be careful about this.
We need to not assume that AI, just because it's the hot new technology, is going to be any good.
And I think that's why we want to do a show like this, because, I think you put it together very eloquently, but it's just not necessarily really good business in general.
The people that are maybe running these companies now having some influence within our country, making decisions like this based on their own ideas and preconceived notions, maybe, as Meredith is kind of saying.
In some examples, I don't know if anybody wants to follow up on that.
Well, I think we have just a very homogeneous group of Broligarchs.
And what happens when you have a really homogeneous group is that the decisions made by that group, are consistent with the biases inside that group.
This has been a problem for a long time in Silicon Valley because Silicon Valley tends to have a very homogeneous workforce.
They've paid lip service in the past to increasing diversity, increasing representation.
But, that has been jettisoned by many, many major corporations, in the past couple of weeks, and the problem is that people embed their own biases in the technologies that they create.
And we all have unconscious bias.
We're all working very hard to overcome it every day, I hope, but we can't see it.
It's unconscious.
And so when you have this homogeneous group of people making technology, they embed their collective biases in the technology and they can't see it because it's unconscious.
And the problem with doing this at scale is that we have things like technologies that are making decisions about whether somebody is eligible for public benefits.
And we have, you know, these inherent biases against, say, people of color.
And so what ends up happening is these systems get used to deny people public benefits as opposed to expanding access to public benefits.
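[Editor's note: Meredith's point about historical bias feeding forward can be sketched in a few lines of code. This is a toy illustration with invented numbers and groups, not any real benefits system or any method discussed on the show.]

```python
# Toy sketch: a "model" trained on historically biased approval decisions
# reproduces the bias. The records are invented for illustration.
from collections import defaultdict

# Hypothetical history: (group, approved). Group B was approved far less
# often, reflecting past discrimination rather than actual eligibility.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    """'Learn' each group's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: ok / n for g, (ok, n) in counts.items()}

def predict(model, group, threshold=0.5):
    """Approve whenever the learned approval rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
print(predict(model, "A"))  # True:  group A keeps getting approved
print(predict(model, "B"))  # False: group B keeps getting denied
```

Nothing in the code is "prejudiced"; it simply learned the disparity that was already in the data, which is exactly the unconscious-bias-at-scale problem being described.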
Really terrific.
Thank you so much for that, Meredith.
That is, I almost want to clap for you, because, you know, we bring you on the show because you coined the term techno chauvinism.
It's very important for us to understand this, and I'm really happy with the way you broke it down, as did Sigal earlier.
And helping us understand how we're all really impacted by this.
I'm really appreciative of the examples that Kathryn gave.
And now I'm going to ask you, Kathryn, to give us some examples of the positive aspects of AI and why we really need it, and ultimately why we'll need it to do it the right way.
First, can I give you one example of a bias getting embedded into the software?
Absolutely.
Because I thought of this immediately when Meredith was just talking about that.
So this was back in the 2010s.
Joy Buolamwini was a student at MIT, and she was trying to use AI facial recognition software just to do a project.
And she has dark skin, and the software could not see her face, and she had to put on a white mask and get her lighter-skinned roommate to help her just to be seen.
So this is literally being not being seen by your computer.
And she has since gone on to found the Algorithmic Justice League, which is an organization working to combat harms in AI.
And she also did a lot of work to get companies to fix their AI face recognition.
So it is better now.
I think it's probably still not perfect, but it's better now than it was then.
But yeah, that's just a fascinating example and a terrifying example of what can happen when the people in the room aren't considering, you know, people who aren't sitting there when their voices aren't heard.
Very terrifying, but very important to understand, too.
And that's why we do this for our audience.
I know that people might be looking for levity from the show.
I think naturally I can't help but do it every now and then, and we all smile and things like that.
But no, also, we're trying to do journalism here, which is what you all do, to inform people.
So help me understand why we need AI to work out the right way.
Yes, I do have some levity and, hopefully, some positive examples.
I mean, so the 2024 Nobel Prize in Chemistry went to researchers at Google DeepMind.
And I think that was a really important innovation.
It was for work on protein folding.
So this matters because AI can actually help us find new medicines and helpful molecules that could carry, you know, health care and science forwards.
There's also some positive research around AI chat bots.
I spoke with an expert who'd gone through, you know, dozens of studies and found that the results were mostly positive, that these were helping people to feel less lonely and increasing self-reported, you know, scores of people saying that they are less anxious or less depressed from having this robot to talk to.
I mean, obviously having a real person to talk to is preferred, but a robot might be better than no one at all.
There was another positive study that found that talking with a chat bot could reduce people's belief in conspiracy theories.
And, there's also a lot of examples I found of teachers and students using AI productively.
So I, you know, you hear about kids cheating using AI, and of course that still happens, but there are teachers out there showing their students how to make, like, practice quizzes and how to use AI to understand, like, a part of their textbook.
Like the teacher's not available at 10:00 the day you're studying for your test.
But this AI is.
So it's essentially free tutoring?
Yeah, exactly.
Or even creatively, she had the kids put a bunch of craft materials into the AI and say, what can I make with this?
And she said the kids often did not make what the AI suggested, but it got them started, whereas otherwise they might have just sat there, you know, not knowing what to do.
Brian, bring us home and help us predict how this ongoing effort to regulate AI is going to play out at the legislative session here in Connecticut.
So fantastic question.
And it's hard to be in the hearts and minds of state lawmakers, but I will say that what Governor Lamont said last session, it seems as though Senator Maroney has tried to adopt with fidelity, in terms of working with a consortium of state houses, absent federal policy on AI regulation, to ensure that there's not a patchwork of rules.
So, Colorado passed a pretty comprehensive law last session, and now Connecticut is looking to follow.
And very important to say really quickly, they had a task force.
It's Connecticut.
So there was a task force associated with everything.
But Maroney has actually been doing work on this behind the scenes is what you're trying to say, essentially.
Exactly.
Yes.
That's precisely what I'm saying.
Yes.
Thank you guys so much from Connecticut Public Radio.
This is the Wheelhouse.
I'm Frankie Graziano.
You've been listening to Brian Zahn, reporter with the New Haven Register.
Brian, thank you so much for coming on the show.
Thank you.
Thank you, Brian.
You've also been listening to Meredith Broussard, data journalism professor at New York University.
Kathryn Hulick, freelance science journalist.
Meredith and Kathryn are going to stay with us after a short break.
Stay tuned to the Wheelhouse.
What comes from LiveWell? Redefine life with dementia, your way.
Come explore what LiveWell can offer during the next open house, February 15th from 1 to 3.
For details and to register, visit livewell.org.
Thousands of Palestinians are returning to devastated homes in northern Gaza.
For about 15 months, the only question we asked was when this war will finish, when we will have a cease fire in Gaza.
So it was like a dream.
But the cease fire almost collapsed this past weekend.
Can it be sustained?
That's on the next On Point.
Listen this morning at ten.
Your PBS favorites made 2024 unforgettable.
From cozy countryside to thrilling mysteries, discover the top five shows our audience couldn't miss.
Watch now at Cty Public Naugatuck 2024.
New year, new resolutions.
Creating a will is an act of care for the ones you love.
With Connecticut Public's partnership with FreeWill, it's never been easier.
In less than 20 minutes, you can create a will that's safe, secure, and absolutely free.
Start your year with peace of mind, knowing your wishes are clearly outlined for the future.
Because taking care of your loved ones is the resolution that lasts a lifetime.
Visit CT public.org/visionary to get started today.
This is the Wheelhouse from Connecticut Public Radio.
I'm Frankie Graziano.
This hour we're doing a lot of work for you.
Unpacking such terms as the broligarchy and shedding light on the local arguments for and against artificial intelligence regulation.
Spoiler alert this is happening in state capitals across the country.
But now we'll also take a look at whether any organizations are attempting to fix this.
Also want to let you know that we have time to take your phone calls as well.
Give us a call at (888) 720-9677, (888) 720-9677.
Still with me.
Meredith Broussard, data journalism professor at New York University, and Kathryn Hulick, a freelance science journalist and author.
Again, we'll take your questions.
Give us a call at (888) 720-9677.
Meredith.
The harms associated with artificial intelligence don't impact all people equally.
Who is most likely to be impacted negatively by AI?
It's really the people who are already the most vulnerable.
So rich people are going to be on the receiving end of more positive decisions by AI, and poor people are going to be viewed more negatively.
We see this in the example of who gets allocated mortgages by AI systems or automated systems.
It will by default discriminate.
It will offer mortgages or more favorable loan terms to white applicants as opposed to their counterparts who are people of color.
Thank you so much for doing that for us.
I appreciate it very much.
I want to now ask Kathryn the next question.
We talked about some of the negative side effects of AI.
Are there organizations, companies working to minimize some of the harmful effects of some of these things that, we were talking about earlier with Meredith and the people that are impacted by this marginalized communities?
Who's working on this?
Well, yeah.
Most major AI companies say they're very concerned about AI safety and AI ethics and have various programs to work on this.
It doesn't always work as they hope.
There's a funny example from early 2024 with Google's Gemini image generation. The company noticed that there was this problem with biased AI imagery, that, you know, most of the images coming out were not diverse, or white people were way overrepresented in the images that were being produced.
So they're like, we're going to fix this.
But then people using Gemini found that they'd ask for an image of historical figures, such as the crew of Apollo 11 and get back a diverse group of people, which, of course, is historically inaccurate.
That was not the crew of Apollo 11.
You know, it was white men.
And so in response to this, Google actually turned off image generation completely.
And you still can't get the model to make any pictures of people.
So they decided this is too hard to fix.
We're just not going to let our model generate images of people at all.
Which I suppose is avoiding bias and discrimination in some ways, but it's also making your tool a little less useful.
I mentioned the Algorithmic Justice League earlier.
There are lots of nonprofit organizations like that one that are working towards accountability and safety in AI.
Another interesting example is Adobe Firefly.
So another issue with AI image generation is that artists feel exploited.
They say they never gave permission or they definitely didn't give permission for their images to be used to train AI.
A lot of authors are similarly upset.
And so Adobe decided we're going to build our image generator using only licensed or copyright free images, and they have done that.
Artists still aren't completely happy with it because although they were paid, if their images were in the data set, they say it wasn't nearly enough.
And they weren't asked permission in advance.
So it's a good start, I think, but there's still work to be done to help creators feel like they're part of this process and not being exploited by it.
Thank you, Kathryn, and thank you for that.
I will say that I am one of them.
Yes.
Yeah.
So my previous book, Artificial Unintelligence: How Computers Misunderstand the World, was used without my consent to train generative AI.
And as a creator, I'm pretty mad.
It's...
Yeah, let's give you a little time to vent that out.
Tell us why you're mad here this morning.
You should obviously be credited for that.
And did it have unintended consequences, really quickly?
Well, what it has is economic consequences for artists.
In the previous world, if somebody wanted to use, say, something you had written or something you had drawn, copyright law came into effect, and that person had to license it from you.
Right.
So one of the ways that authors made a living was by licensing out different parts of their books.
And now the generative AI companies have just gobbled up all of these books and are using them to train their models and then make a bunch of money without compensating the creators.
And that's just that's a pretty significant economic harm.
It's terrible, all that work that you put into it.
And I can't imagine what that was like for you.
So I apologize to you this morning, and thank you for sharing that experience.
It's a terrible thing to go through.
I only have a few minutes left on the show, so I just want you to help me out with the energy-intensive nature of AI and the impact that it has on our environment.
At this point, do we know AI's impact on our environment?
Meredith.
Well, we know that it's bad.
We do have a process called algorithmic auditing, right?
Which doesn't really roll off the tongue, but it is the idea that we can audit or examine AI systems to discover the ways that they're discriminating.
And then we can also apply this to energy use.
We can look at AI systems and estimate, okay, how much energy are they using in order to train these models?
How much are you using every time you do a query?
And so rough estimates right now suggest that doing, say, a ChatGPT query requires about as much water for cooling as you would have in a small bottle of water.
So it's basically like taking that small bottle of water and pouring it out on the ground.
If you think about how your laptop works: when you have it on your lap for a really long time, it gets very warm, right?
You usually have to move it off of your lap.
Well, that's how all computers work.
And so these supercomputers that, that power AI systems, also get really hot.
They have to be cooled down with water, with air conditioning.
The water can be recirculated, but only about four times, because then it gets, you know, clogged with minerals, too warm.
And it has to be discarded out to the water treatment plant.
And so the AI companies are using as much water as, say, an entire state.
And that's not good for our environment.
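[Editor's note: Meredith's bottle-of-water comparison can be turned into back-of-the-envelope arithmetic. Both inputs below are assumptions chosen for illustration, not figures from any AI company.]

```python
# Rough scale of the water claim above. The per-query figure and the daily
# query volume are assumptions, not measured data.
BOTTLE_LITERS = 0.5            # assumed cooling water per query ("small bottle")
QUERIES_PER_DAY = 100_000_000  # assumed daily query volume

liters_per_day = BOTTLE_LITERS * QUERIES_PER_DAY
olympic_pools = liters_per_day / 2_500_000  # an Olympic pool holds ~2.5M liters

# prints: 50,000,000 liters/day, about 20 Olympic pools
print(f"{liters_per_day:,.0f} liters/day, about {olympic_pools:,.0f} Olympic pools")
```

Under these assumptions, one popular chatbot alone would pour out tens of Olympic pools of cooling water a day, which is the kind of scale behind the "as much water as an entire state" comparison.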
I hope that you could all keep your calendars open for us in the future, because there's so much for us to dig into that unfortunately, we're out of time and won't be able to get to.
You've been listening to Kathryn Hulick, freelance science journalist and author of Welcome to the Future: Robot Friends, Fusion Energy, Pet Dinosaurs, and More.
Kathryn, thank you so much for coming on the show.
Thank you for having me.
And, of course, Meredith Broussard, data journalism professor at New York University and author of More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.
Respect Meredith's work.
Compensate her for the work that she does, as well as Kathryn's.
Thank you so much for coming on the show, Meredith.
Thank you.
Today's show produced by Chloe Wynn.
Great job.
Chloe.
It was edited by Robyn Doyon-Aitken and Meg Dalton.
Technical producer is the maestro Dylan Reyes.
Download The Wheelhouse anytime on your favorite podcast app.
I'm Frankie Graziano.
This is the Wheelhouse.
Thank you for listening to me.
And again, Frankie, it's been a pleasure.
Thank you.
All right.
It was really a pleasure.
Call any time.
Okay.
Thanks a lot.
Bye bye.
Thank you.