07.29.2025

“AI and the Trust Revolution”: How AI Impacts Who and What We Trust

Yasmin Green, CEO of Jigsaw at Google, and Gillian Tett, columnist for the Financial Times, discuss how AI is particularly impacting Gen Z—and how more trust is now being placed in AI chatbots than in traditional leaders and institutions. Green and Tett join Walter Isaacson to explore how policymakers can leverage AI’s “digital boost” to help bridge the growing trust gap.


BIANNA GOLODRYGA: We turn now to a topic everyone is talking about: the dangers and potential of artificial intelligence. This rapidly advancing technology is raising concerns around the world and is already becoming an unavoidable part of modern life. So, do we trust it? And how is it impacting our relationships? Well, that’s the question our next guests are examining. Yasmin Green is the CEO of Jigsaw, a unit of Google focused on emerging technologies, and anthropologist Gillian Tett is the provost of King’s College, Cambridge. They join Walter Isaacson to discuss their recent piece for Foreign Affairs, exploring how AI represents a drastic change in human connection.

 

WALTER ISAACSON: Thank you, Bianna. Yasmin Green and Gillian Tett, welcome to the show.

 

GILLIAN TETT: Great to be with you.

 

ISAACSON: The two of you have just written a piece in Foreign Affairs about trust and the AI revolution. Let me start with you, Yasmin, with a very basic question. Are AI chatbots replacing search? You work at Jigsaw, which is a unit of Google, so you’re probably best placed to answer that.

 

YASMIN GREEN: Well, the paper actually looks at the evolution of trust over time, starting with when we lived in small societies. We talk about eye-level trust; then, as we started to settle down and live in larger societies, we talk about the evolution of vertical trust in institutions; and the internet introduced this new trust phenomenon that we call distributed trust. And then we ask what’s happening now in this new era of AI and, eventually, AGI. It’s often said that we are in a trustless, post-trust world where trust has evaporated. Gillian and I have really explored another possibility, which is that trust has migrated, and we are looking to new trust norms.

 

ISAACSON: And is this something specific with AI and large language models, or is this true for the whole internet phenomenon at the moment?

 

GREEN: Yeah, so that third trust era, distributed trust, was really what we saw with social media and the internet, sometimes called Web 2.0: this idea that you would get into a car with a complete stranger, or rent accommodation from someone you’ve never met, or trade with people you don’t know. That was all enabled by the internet, by platforms that enabled us to have some trust. And what is trust in that context? It’s a belief in the reliability of something, whether in trade or transport, et cetera. So the internet really transformed who and how we trust. And what we explore in the piece is how this AI era, this era of chatbots that we can talk to, that talk back to us, that are available 24/7, that are personalized, that are private, really is ushering in a new trust era beyond that of the internet.

 

ISAACSON: Yasmin has just talked, Gillian, about this notion of peer-to-peer trust that we see in Airbnb and Uber. Tell me how distributed trust works anthropologically.

 

TETT: Well, I’m an anthropologist by training. And the key point is this: people know that anthropologists study culture, and they sometimes think that culture is a bit like a Tupperware box, sealed and static, and that you can stack different cultures on top of each other in a hierarchy of value, and of course assume that your own culture is the most valuable. But anthropologists actually have a different vision of culture: it’s more like a river, slow moving and constantly changing, with new streams coming in. And that applies to the issue you mentioned earlier about search versus bots. The reality is that our digital culture is in constant flux, and right now the question of who we trust is also in flux. Traditionally, anthropologists assumed there were two axes of trust that glued societies together: face-to-face trust, eyeball-to-eyeball peer-group trust, if you like; or, when groups got really big, trust in a vertical axis, in leaders and institutions, because you couldn’t eyeball everybody. But now digital links are essentially creating a whole new type of trust, peer-to-peer trust on a massive scale, no longer bounded by geographical proximity. And that’s really at the heart of what we’re arguing: our digital culture is slowly evolving like a river, and right now that horizontal, distributed trust is incredibly important to think about.

 

ISAACSON: But when you talk about peer-to-peer trust, you’re talking about interpersonal trust, in other words, human beings distributing trust. How does AI change that equation?

 

TETT: Well, here’s the key issue: as I say, distributed trust has emerged as a really important platform since the internet took hold, and it has underpinned things like the rise of Airbnb, Uber, and many other tools we use today. But that doesn’t stay still either. Introducing AI into this equation is creating a new form of trust, where AI can act in what we call the four Ms. Our vision of AI has historically been that of a master, coming to us from a vertical axis, bossing us around and telling us what to do. Or you can have AI as a mate, one of our gang, one of our friends. Or AI as a mirror to ourselves. Or AI as a moderator, using all of the digital tools available to moderate conversations between human beings. And they can all play a role today. So it’s not just a question of whether we trust AI; it’s really about whether we trust the idea that AI tools might actually make it easier for humans to trust other humans, or at least to interact with them in a way that could build trust.

 

ISAACSON: Now, Yasmin, I think one of the main outlets, from what I read, is TikTok. Of course we all use it, but Gen Z uses it probably more than the rest of us. Explain how that changes things.

 

GREEN: If Gen Z consistently reads certain news outlets, it’s because they believe in the individual journalists; they’re following the individual journalists as opposed to the brand of the institution, of the media outlet. And that helps you understand the rise of news influencers on social media, who deliver their interpretation of the news. It’s not investigative reporting; it’s really opinion, built off the back of the investigative reporting of news institutions.

But the reason that social signal matters so much to Gen Z is that they believe in people who look and sound like them: influencers who go viral on social media. They speak to you as if you’re in a one-on-one conversation, with words you understand, with references you understand, and there’s a trust, an authenticity. It’s almost a tension between authority and authenticity. Authority used to trump everything in a vertical trust world of institutions, but increasingly, Gen Z especially really subscribes to authenticity as a reason to trust.

 

ISAACSON: So, Gillian, I’m gonna ask you to follow up on that. Normally, when I try to get news or information, I do a search query or a chatbot prompt because I want to know the truth; it’s a truth-seeking query. But Yasmin seems to say that in the study of Gen Z, they’re looking more for affirmation, social affirmation, rather than just pure truth seeking.

 

TETT: Well, you’re absolutely right, Walter. I think anyone of our generation tends to operate in the way you’re describing, which is basically to look to experts or expert sources for validation of our ideas. Now, one of the key precepts of anthropology is that you can never assume the entire world looks at the world the same way you do. And although we are all hardwired to assume that the ideas we grew up with are normal, unchanging, and correct, and that everyone else should look at the world like we do, that is a complete fallacy in today’s world, not just because we live in an age of globalization, but because different generations have different approaches. So the reality is that Gen Z doesn’t trust experts and institutions in the way that you and I were reared to do, which is horrifying to anyone who’s an elite person in an institution, or of our generation.

But the reality is there are good and bad things about what’s going on. The good thing is it’s potentially very empowering, in some ways more democratic, and you can potentially get a much wider range of voices into this type of debate. The bad thing is it can make for a very chaotic cacophony of noise, in which it is very hard to have serious conversations about policy trade-offs. And of course there’s scope for a tremendous amount of attention deficit, but also a tremendous amount of misinformation in some contexts. Those are the dangers.

And neither Yasmin nor I are in any way sugarcoating what’s going on or denying the dangers associated with the explosion of AI bots and the risk that trust gets abused. We’re both strongly arguing for a sensible conversation to mitigate those dangers. But there are upsides, too, to the way Gen Z is approaching this, that frankly we can all learn from. And whether you love it or hate it, the reality is this shift is occurring and you cannot afford to ignore it.

 

ISAACSON: Does this shift lead to more conspiracy theories, Yasmin? 

 

GREEN: Yeah. There’s a paper we talk about in our essay that was done by researchers at MIT and elsewhere; it’s called “DebunkBot,” and you can find it online. They used LLMs. So let’s talk about what’s powerful about this LLM moment, and about the promise we should be investing in as we try to make the LLM era work for us, as individuals and as a society. They recruited conspiracy theorists and gave them some back-and-forth interactions with a chatbot. And they found that after just two or three exchanges, people on average reported beliefs in conspiracy theories about 20 percent weaker than before, and that those effects endured over months.

And the interesting thing: people would explain why they believed in chemtrails. You know, “I can see the exhaust of the plane, and I know the government engages in mind control.” And the LLM went back and forth and explained, well, those are called contrails; it’s condensation. It just engaged with them. The quotes about people’s experience during the study were so powerful. They said, “I’ve never had such a helpful conversation, where someone explained it to me so well.” And that’s because we assume that others have an agenda when they’re trying to communicate with us; I think that’s some of the prejudice against experts or institutions. They felt the chatbot was neutral, that the chatbot was a safe space for them to really have that back-and-forth engagement.

What we talk about in our paper is some of that promise: how can we use chatbots? One example comes from academics at the University of Bingham whom we’re supporting at Jigsaw; they’re using AI to help people communicate who would otherwise disagree. Think of one of the most incredible uses of AI, so incredible and so prevalent that we hardly refer to it as AI anymore: language translation. With Google Translate, you can speak to anyone in the world despite not knowing the same language. Well, the cause of most conflict is not really that we don’t speak the same language; it’s actually that we don’t have the same worldview. The reason dialogue breaks down is that I don’t know where you’re coming from, and I can’t explain to you where I’m coming from in a way that lands. So this study is looking at the power of LLMs to enable people to do not language translation but social translation: can your viewpoint be explained to me in values that I understand, and back and forth?

 

ISAACSON: Well, wait, wait. Give me a concrete example of that, like what you did in Bowling Green, Kentucky.

 

GREEN: Okay, well, Bowling Green is our latest project at Jigsaw, where we used LLMs to bring together an entire town. Bowling Green is the fastest-growing town in the amazing state of Kentucky, expected to double in size over the next 20 years. Their judge-executive, essentially the mayor, Doug Gorman, realized that this change was really going to upend society if he didn’t bring people together to help them chart their future. But there is no way to bring a hundred thousand people together and have a single conversation, right? The internet actually didn’t deliver that for us; social media didn’t help us have very productive, large conversations.

So we worked with Doug Gorman, the innovation engine, and local leaders in Kentucky to basically do an AI-enabled town hall. Previously, when they had done town halls in Bowling Green, Kentucky, they had about eight or nine people attend. When we worked with them to invite anyone to participate in a virtual town hall, they had 8,000 people: a thousand times the participation, a million opinions expressed, thousands of policy proposals. And we used AI to help the judge-executive make sense of it all. The thing that was so stunning, beyond the themes of what they wanted, was that for more than half of the policy proposals there was near-universal agreement. I think we’re in a world where we’ve felt that social media hasn’t really served us in bringing us together as people, but the LLM era holds promise to do that.

 

ISAACSON: Gillian, the whole notion of social media, the digital revolution, LLMs, was that it was going to connect us, make us more united. That clearly didn’t work. From an anthropological perspective, what causes divisiveness? And is it built into these technologies, or even into the business models and algorithms these technologies use?

 

TETT: Well, that is such a great question, because I spent a lot of time talking to the founders of Twitter. When they originally created Twitter, they imagined it as being a bit like a cyber bar, where everyone just bumped into everyone else and hung together as one great big happy mass. And one of the reasons they had that little bird logo on Twitter was that they imagined everyone flocking together in a great big happy group, singing kumbaya.

And the reality is that the minute Twitter became so popular that you had vast numbers on the platform, you began to have massive fragmentation, which is really about this fundamental human urge to hang together with people who look like you and to choose whom to hang with.

Now, in the real world, you can’t do that most of the time because you are essentially constrained by geography, your workplace, your family. Your identity is kind of handed to you, and so is your social group. But the crucial thing to understand about the internet is that when we go online, we have, for the first time in history, the ability to fashion identity exactly as we want, and to customize, to pick and mix exactly who we want to hang with and how we present ourselves. It’s the ultimate pick and mix tool, if you like. And people tend to pick and mix that themselves into tribes that reflect the real world tribalism, but actually intensify it because of this customization.

 

ISAACSON: Well, wait, wait. Is that a good thing or a bad thing?

 

TETT: Well, it’s both; all innovations have a good side and a bad side, whether you’re talking about electricity, nuclear power, or anything else. The good thing is it feels empowering. You can find your peeps across a really wide geographical area, so people who in the past might have felt quite isolated, because they were the only person in their village who was like them, can suddenly feel a sense of solidarity.

The bad side is that people start self-selecting into tribal groups. And because big tech is able to create an architecture that essentially sends you down rabbit holes of your own choosing, that has reinforced the tribalism dramatically. So, as we’re trying to say, we are not sugarcoating the dangers of what’s going on whatsoever. We all know about that; we’ve all read Jonathan Haidt’s book “The Anxious Generation” and all the other things out there pointing out the dangers. And AI brings dangers; that’s definitely out there.

But there’s another side of it, which is that, as Yasmin says, AI bots may be more neutral in a world of human tribalism than we realize. And that might start to have really interesting positive benefits, if we master our use of AI and the bots rather than being mastered by them. And maybe one way to start is, instead of saying artificial intelligence, which sounds like something imposed on us, often from the top, to start talking about augmented intelligence or accelerated intelligence, which conveys the idea that we as humans can use these tools with a sense of agency, and possibly even for good: to unleash the better angels of our nature, if you like, rather than the demons we’ve had so visibly on display in recent years.

 

ISAACSON: Yasmin Green, Gillian Tett, thank you both for joining us.

 

TETT: Thank you.

 

GREEN: Cheers.

About This Episode

Jeremy Diamond reports on the latest humanitarian situation in Gaza. Rep Jake Auchincloss (D-MA) discusses Democrats’ strategies ahead of next year’s midterms. Margo Price releases a new country-pop record. Yasmin Green and Gillian Tett explain how Gen Z trusts AI chatbots more than traditional leaders and institutions.
