
Story in the Public Square 8/17/2025
Season 18 Episode 7 | 27m
Tackling a crisis in critical thinking with Alex Edmans.
Scholars, journalists and even some politicians warn about the lack of critical thinking in contemporary life. Finance professor Alex Edmans picks up that alarm and warns we're regularly exploited by those who would use our own sloppy thinking to mislead us. Edmans discusses the harmful implications that a lack of critical thinking can have on us all.
- Scholars, journalists, and even some politicians often warn about the lack of critical thinking in contemporary public and private life.
Today's guest picks up that alarm, and warns that we're regularly exploited by those who would use our own sloppy thinking and unconscious biases to mislead us.
He's Alex Edmans, this week on "Story in the Public Square".
(pleasant music) (pleasant music continues) Hello, and welcome to "Story in the Public Square", where storytelling meets public affairs.
I'm Jim Ludes from The Pell Center at Salve Regina University.
- And I'm G. Wayne Miller, also of Salve's Pell Center.
- And our guest this week is Alex Edmans, a professor of finance at London Business School.
He's also the author of several books, including the recently published "May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases, and What We Can Do About It."
He's joining us today from the United Kingdom.
Alex, thank you so much for being with us.
- Well, thanks so much for having me.
It's great to be here.
- So when I reached out to you about the book, I said, you know, that "The spirit of this book really captures the spirit of the show, trying to encourage some critical thinking."
Why don't you give us a quick overview of the book, and then we'll get into it in some detail?
- Certainly, so why did I write the book to begin with?
So my day job is a professor of finance.
But I don't just do research to sit in some academic journal, I try to use it to influence the practice of business, the way executives run companies, or investors invest money.
But when I use research, be it my own or other people's, often how people will respond to the research depends on whether they like it, not its accuracy.
So if there's a study they like, it's the world's best study.
And if it's a study they don't like, they say, "Well, it's just purely academic.
It has no relevance for the real world."
So what I wanted to do was to highlight our biases, in particular confirmation bias, that cause us to interpret something based on its appeal and not its accuracy.
And highlight simple steps that we can take to ensure that we are not falling for fluff, and that we are basing our decisions on the most relevant and most accurate information.
- Yeah, so how pervasive, in your estimation, are misstatements?
I'm gonna give some folks the benefit of the doubt, misstatements of truth, but also, intentional misinformation, disinformation.
How prevalent is that in the way we make decisions about policy, or even the way we live our lives?
- In terms of the second one, which is intentional misstatements, disinformation, I don't think it's that prevalent.
There are a few bad actors, but I think many people are generally honest.
But what I wanted to highlight in my book is that disinformation is only a very small part of misinformation.
There can be many times where intended good actors are unintentionally spreading misinformation.
And I'm somebody who did that myself.
So I go through in the book examples of when I've taught things to my students, which I thought to be true.
And I took from what I thought to be trusted sources, but I was actually incorrect.
So this is something which I think is hugely pervasive.
It even affects the best and brightest among us.
And that's why I was motivated to write the book to highlight this problem, which I think is widespread.
- So you mentioned confirmation bias, what is it?
And can you break that down for us please, Alex?
- Certainly, so this is the idea that we have a preconceived view of the world.
And if we see something that confirms that view, we lap it up uncritically.
And if we see something that contradicts it, we will dismiss it.
So let's give some examples.
So let's say I'm a believer that climate change is a hoax.
I'm not, this is just an example.
But then if I saw a study which claimed that climate change is indeed a hoax, and it is being spread by some lobby groups, I will tweet this from the rooftops.
I will share this with as many people as possible, without actually checking that it's true.
And if I see something that is an inconvenient truth, I will dismiss it.
So one example was the Deepwater Horizon disaster, where BP had an oil rig which spilled millions of barrels of oil into the sea.
So why did they do this?
Why didn't they check that it was safe?
Well, they did check; they performed the standard test, known as a negative pressure test.
This test failed many times.
And then what the engineers did is they engaged in what's known as motivated reasoning.
They came up with some reason to dismiss those tests because they didn't want them to be true.
And this gave them an excuse to run a quite different test.
And that test passed, they thought the rig was safe.
And therefore, that led to the disaster.
- So you've said that people have preconceived ideas.
Where do those preconceived ideas come from?
I'm guessing it's their upbringing, where they live, their education?
But can you get into that in a little bit of detail, because I think our audience would really like to hear about that?
- You are absolutely right.
So a lot of them can come with your upbringing, just the company that you keep.
And you might think that that does make sense for strong political viewpoints.
So views on immigration or gun control, they might depend on whether you grew up in a more left-wing or right-wing family.
But what I wanted to highlight in my book is that preconceived viewpoints can be quite subtle.
So, again, another example of when I myself fell for misinformation was before my first child was born.
So I went to parenting classes.
And while many of these classes were about important things, like how to change a nappy, there were two modules exclusively on breastfeeding.
And they were saying that the World Health Organization recommends exclusive breastfeeding for the first six months.
And so I believed this, not because I have a huge strong ideological bias, but because I have a more subtle bias, right?
You are brought up to think that something natural is better than something manmade.
Natural flavorings are better than artificial flavorings.
And so, it seemed quite natural to think that breastmilk is better than some formula concocted by some giant corporation.
But actually, when you look behind the curtain, you find that the link between breastfeeding and health outcomes is not causation, but correlation.
So it's not that breastmilk causes the better health outcomes.
But often, the women who choose to breastfeed are women with a more stable home environment and family support.
And it's those factors that are behind the positive health outcomes.
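The correlation-versus-causation point can be made concrete with a small simulation. The Python sketch below is an illustration only, not from the book: a hypothetical "family support" variable stands in for the stable-home factors Edmans describes, driving both the choice to breastfeed and the health outcome, so the two correlate even though breastfeeding has zero causal effect in this toy model.

```python
# A minimal sketch of correlation without causation (illustration only,
# not from the book). A hidden confounder, "support", drives both the
# breastfeeding choice and the health outcome; breastfeeding itself has
# no causal effect on health in this toy model.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

support = rng.normal(size=n)                    # hidden confounder
breastfed = (support + rng.normal(size=n)) > 0  # support raises the odds of breastfeeding
health = support + rng.normal(size=n)           # outcome depends only on support

# Naive comparison: breastfed children look noticeably healthier...
naive_gap = health[breastfed].mean() - health[~breastfed].mean()
print(f"naive health gap: {naive_gap:.2f}")     # clearly positive

# ...but comparing families with similar levels of support, the gap
# shrinks toward zero, revealing the correlation as non-causal.
similar = np.abs(support) < 0.2
adj_gap = (health[similar & breastfed].mean()
           - health[similar & ~breastfed].mean())
print(f"gap within similar-support families: {adj_gap:.2f}")
```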
So why does this all matter?
Because the message that we are given, that breast is best, guilt-trips mothers into thinking you're a bad mother unless you exclusively breastfeed.
So when I looked at the evidence, I realized, no, I could help, as a father, with feeding.
I might sometimes use the bottle.
That might allow my wife, who's exhausted, to rest.
And so, when you look at data and evidence, you might think, well, being driven by data means you have to live your life by this particular book.
It's actually the opposite.
Actually, when you realize that data is sometimes fragile, that allows you to live more freely than the black and white messages that we are sometimes given, such as breast is best.
- So one of the things that's remarkable about the book is that you really almost scaffold your own thinking for the reader, to help us walk from the idea that, you know, a statement is not a fact.
All the way up to, what do you do with facts?
And when does that graduate all the way to the level of evidence?
Let's start with that first dichotomy though.
A statement is not a fact.
What's the difference?
- Yeah, so we like to quote statements all the time.
So we might say 10,000 hours was the secret to success because Malcolm Gladwell, a "New York Times" bestselling author, said so.
Or we might say, "Culture eats strategy for breakfast," because Peter Drucker, a famous management guru, said so.
Or we could prescribe opioids worldwide because there was an article published in the New England Journal of Medicine, which said addiction is rare in patients treated with narcotics.
Now, disinformation 101 says all you need to do is to check whether the person actually made that statement.
Because we know that everybody from Mark Twain to Aristotle, they sometimes get misquoted.
But what I'm trying to point out is that is not enough.
Even if the statement was actually given by the person, it may well be misleading.
For example, the statement that addiction is rare in patients treated with narcotics.
That was indeed the title of an article in the New England Journal of Medicine.
But this was not a scientific study, this was a letter to the editor.
And the opioid epidemic, which has led to around 2 million deaths, about 600,000 of them within the US alone, has been based in part on the belief that you can give people opioids and they're not gonna get addicted.
So what's really important is not just to check yes, did the person actually make that statement?
But the context of the statement.
And even if you think, well, the letter to the editor was genuinely true, actually the study it looked at was of hospitalized patients.
And it may well be that narcotics are not addictive when given in a hospital, which is a controlled environment.
But when given as an outpatient, they could be deadly.
- You know, a lot of this seems to me like it has a lot to do with the ease with which we can access information in the modern information ecosystem.
And essentially, I think what you're saying is that information is not the same as knowledge.
And it's certainly not the same as wisdom.
Could you elaborate on that a little bit?
- Yeah, so we can easily access knowledge, but actually, let me refine that.
We can easily access statements.
So the statements that we hear are short and punchy.
They could be tweeted in 280 characters.
- Right.
- So something as simple as addiction rare in patients treated with narcotics, that has a clear takeaway.
And that's easier to remember than actually, narcotics are fine for hospitalized patients.
But they might not be fine if you're given this as an outpatient.
That's a lot of additional levels of detail which make things more complex.
So we like to live in a world of simple black and white rules, because the more black and white it is, the easier it is to remember.
But this masks a lot of complexity.
So I think we mostly access statements, but that is not the same as knowledge.
And I think another term for knowledge is discernment: knowing when to apply a rule, but also when to recognize that, actually, the rule doesn't always apply.
Just like with spelling: yes, it's I before E, except after C, and there are other contradictory examples.
- You mentioned the New England Journal of Medicine, generally considered a tremendously well-respected, perhaps the leading medical journal in the world today.
Enter the era of artificial intelligence and chatbots.
How do they compound how we're gaining access to information and statements?
- They can compound it because often, we think that this is a reliable source.
So people argue that artificial intelligence is even more useful than humans.
So if AI says something, then it absolutely must be true.
It's free from the biases that humans have.
But this is not the case because artificial intelligence does have biases embedded, because what AI might look at is what the consensus is out there.
And what is out there might have been widely spread because it's what people want to be true.
Again, let me give you an example.
So what is the link between diversity and financial performance of a company?
People would love to believe it's hugely positive.
I would myself, as an ethnic minority.
It seems progressive to argue that the more diverse a company's board, the better the performance.
And there are studies by the likes of McKinsey and BlackRock which claim this.
But if you were to take a look at the evidence with even half a critical eye, you can see that the evidence is extremely weak.
For example, McKinsey measures financial performance and diversity in quite different periods, with performance measured in the years before diversity.
So it's in fact financial performance that leads to diversity, rather than the opposite.
But if you were to put into ChatGPT, and I've done this because some of my research is on diversity, what is the link between diversity and performance?
It will trot out the McKinsey study, why?
Because that's what it sees as being circulated in the electronic atmosphere.
And therefore, it thinks that this is scientific consensus, when it's in fact what people have been circulating because they want it to be true.
- So can you expand on AI and the impact it has on people?
If you do a Google search now, and this is relatively new, but if you do a Google search, the first thing that comes up is an AI interpretation of what your question was.
And so, that's what you see first and foremost.
And I myself have gone sort of down that rabbit hole.
It's like, okay, it's AI, and I'm trusting that Google, you know, knows what they're doing.
And it's probably correct.
I then later fact check and figure it out.
But talk about that, that's a revolutionary step I think.
- Yeah, certainly, and with AI, again, my views, like my views on many things, are nuanced, rather than black and white.
So there are some people who say AI is completely revolutionary, we should put all our trust in it because it avoids human error.
And others who say, well, AI is biased, algorithms are biased, let's ignore it.
I think AI is a useful tool, but it is only a tool.
I would view AI as like a handy and prompt research assistant.
But we would never completely delegate our decisions to research assistants.
Whenever I get an answer from a research assistant, or, let's say outside an academic environment, from a junior within your company at work, you won't take what the junior says at face value.
You will then ask them some further questions.
So when I asked ChatGPT about diversity and was given the answer, I then probed it and I said, well, these studies were not published in top peer-reviewed journals.
Can you give me the studies which are the ones that have passed scientific scrutiny?
And it gave me a quite different answer.
So with your example of the Google search, well, let's look at the answer it gives you.
And sometimes that is helpful because what that has done is given you a more targeted answer than if you just had Google without AI.
But sometimes you might want to follow up and say, well, what is your source for this particular result that you are giving me?
Just to make sure that it has not given a hallucination.
And then when you look at the source, then try to check the reliability of the source.
Is it something like a McKinsey study, which hasn't been peer reviewed and vetted by anybody?
Or is it something published in a top scientific journal?
- You know, so you mentioned, you know, data, facts, evidence.
There's a wonderful discussion here about the value of the scientific method in the current era.
What do we do though with outlier cases?
So, you know, cases that are beyond what the model predicts about a certain set of phenomena.
How do we interpret those as, you know, with the sort of clarity that we need to be able to make decisions about everything from climate change to, you know, which car we're gonna buy?
- Yes, and again, I think it's to think about models in the same way that I've suggested thinking about AI.
Models are really helpful, but models do not capture every particular nuance and every particular complexity.
So with a model, what you might be trying to do is to look at general average effects.
So a flight simulator or a city simulator will show that if something changes here, this is how things will respond on average.
But sometimes we don't see average cases.
There could be some extreme situations there.
And it's to recognize, well, why do we want to sometimes override the model?
And this is what great expertise teaches us.
This is why pilots might generally use autopilot, but they will know when it's actually useful to overrule that.
If you get around using Google Maps, sometimes you will not follow what it's telling you, if your experience suggests it might be giving you the wrong answer.
So models are useful guides, but they should not replace human judgment.
And I think some of the issues that we see with AI, often we like to blame the AI for not being perfect.
But I think this is not fair because AI can't be perfect.
It typically looks at what happens on average.
And sometimes the blame should be going to the human who might be treating it as if it's perfect when it's not.
- Well, and I think that you're also making a case for our own level of individual responsibility on some level as users and consumers of information, data, evidence, is that fair?
- Yeah, that is fair.
And I think this is something which is not new to just AI.
It is something that we should apply to any time we have a particular tool.
So let's say when they first introduced psychometric tests, did we use psychometric tests as the only thing in order to assess who to hire for a company?
No, they would be partially informative.
But also you would want to interview the person and get your own judgment.
Do we admit people to college just based on the standardized test scores?
No, that's one useful bit of information, but we also want to use our judgment here.
And so, this is the same with AI.
That can be useful, but it should not be the exclusive thing that we're relying on.
- So, Alex, you get into a number of bestselling self-help and management books over the last many years.
I want to talk about some of them.
Start with the Atkins diet, tell us about that.
- Absolutely, so the Atkins diet was a way to try to lose weight.
And so, what it suggested is that we do this by avoiding all carbs.
Notice the word: all.
So this was all types of carbs, not just refined sugar, but complex carbs as well.
And it said avoid all carbs.
So try to get as close to zero as possible, rather than saying ensure that carbs are somewhere between 30 and 50% of your daily calories.
And so, what this plays on is a bias known as black and white thinking.
So what is that?
So let me first contrast it with confirmation bias.
So confirmation bias, as we discussed earlier, is if we have a preconceived view of the world, we latch onto something that supports that viewpoint.
But there might be certain issues where there's no preconceived view.
So maybe I have an existing view that protein is good and that fats are bad, but carbs are somewhat neutral.
So that could go either way.
So what black and white thinking says is, even if you have no preconceived view, we view the world in black and white terms.
So if a study says that something is always good or always bad, then we're more likely to believe it.
And so, Atkins' idea that carbs are always bad, we should avoid them completely, any type of carb.
That is, number one, attractive, given black and white thinking.
And, number two, easy to action, because all you need to do is look at the carbs item on a nutrition label.
You don't need to figure out whether that's simple carbs or complex carbs.
Notice that had Atkins given completely the opposite advice, which is eat as many carbs as possible, he might have also gone viral.
Because this also plays into black and white thinking, it's easy to implement.
So to pen a bestseller, which Atkins' book was, Atkins did not need to be right, he only needed to be extreme.
- So doesn't this speak to the desire that many people have, maybe most people have, for a fix, a quick solution to a problem?
Is that not part of human nature?
- You've hit the nail on the head.
We want something which is quick, we want something that is easy to implement.
And we also like to look at single solutions.
So if the way to lose weight is to eat better, to avoid alcohol, to exercise regularly, to try to avoid stress, to sleep seven hours a night, that is quite complex.
We need to ensure that we're monitoring all of those things.
But if it's just the one thing that we need to do, let's say eat more blueberries or just avoid all carbs, that is something which is quite simple.
And so, some of these self-help books, whose fragility, as you rightly say, I highlight in my own book, are things which say, well, there's only one thing that you need to do.
Be it avoid all carbs or practice for 10,000 hours, or find your why, the real world is much more complex than that.
So you might think, well, am I being unfair?
Because even though the real world might require you to do 10 things, isn't doing one thing better than doing zero?
And I think it can be problematic because if you focus almost exclusively on that one thing, it might blind you to the fact that we do also need to look at those other things.
It's not that you're getting that one thing for free, because the time that you spend devoting your attention to one issue might be time spent away from the others.
And even that one issue may be much more nuanced than you might think.
Actually, what medical advice suggests is that carbs should be 30 to 50% of your daily calories, not zero.
And that certain carbs, like complex carbs, are actually better for your health than refined sugar.
- So there's so much richness in this book that we could spend the next week talking about it.
But I wanna talk a little bit about research that goes looking for solutions.
You call this examples of either data mining or sample mining.
And the question, I want you to maybe explain a little bit what those phenomena are.
But what I'm really curious about is, what does that say about the need for integrity among researchers, among journalists, and even among public officials who often cite that kind of research?
- And I equally could spend a week replying to that question, which is a very rich one and something which is quite dear to my heart.
So what is data mining?
This is the idea that we start from a conclusion that we want to support, and we can almost always reverse engineer some data to get us there.
So the example that I give in my book is one of the link between diversity and performance.
And, again, I would love this to be true as an ethnic minority.
And so, we're trying to hunt for that result, which I was actually asked to do by a leading investor.
You could measure diversity in a huge number of ways.
So it might be, you could look at, well, the number of women on the board, the number of women in senior management.
Do you have a working mother award?
Do you have a low gender pay gap?
Do you have some paid parental leave?
And how long is this?
And so on.
So there's so many measures of diversity we could look at, and there's also lots of measures of performance we can look at.
Is it sales growth?
Is it profit growth?
Is it total shareholder return with dividends, or without dividends?
Do you look at one year or two year, or five year performance?
And if I was to run lots and lots of tests, then even if there was no true link between diversity and performance, I might just get lucky.
Just through randomness, one or two of these may well be significant.
And so, if I wanted to gain attention, I would just trumpet those one or two results, bury away all the other ones that didn't work.
And then try to present an unequivocal result.
And this is what people have done, and I've given examples of that in the book.
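That "getting lucky" arithmetic can be sketched in a few lines of Python. The snippet below is a generic illustration of data mining, not a reconstruction of any study cited in the book: it runs 100 correlation tests on pure noise and still turns up a handful of "significant" results.

```python
# A minimal sketch of the data-mining trap (illustration only, not from
# the book): with NO true link at all, testing many diversity measures
# against many performance measures still yields "significant" hits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_firms = 200

# Hypothetical metrics generated as pure noise: 10 "diversity" measures
# and 10 "performance" measures, giving 100 possible pairings to test.
diversity = rng.normal(size=(n_firms, 10))
performance = rng.normal(size=(n_firms, 10))

hits = []
for i in range(10):
    for j in range(10):
        r, p = stats.pearsonr(diversity[:, i], performance[:, j])
        if p < 0.05:  # the conventional 5% significance threshold
            hits.append((i, j, round(r, 2)))

# At the 5% level, roughly five of the 100 tests are expected to look
# significant by chance alone; reporting only those is data mining.
print(f"{len(hits)} spurious 'findings' out of 100 tests: {hits}")
```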
So what does this mean in terms of integrity as a researcher?
It means not doing something like data mining.
And this is far easier said than done because we live in a world in which to be published in top academic journals, you often do need significant results.
And if I wanted to hit the headlines of a newspaper or to give a TED Talk, claiming that diversity improves performance is gonna get me there much faster than claiming that there is no link.
But even if I were to appeal to a researcher's self-interest, rather than public spirit, I would highlight that, actually, data mining is often found out.
So what can people do?
They can check for robustness.
They can do follow-up studies, measuring diversity and performance in different ways, and find that the result goes away.
And this is indeed what has happened with McKinsey with their diversity research.
Yes, they may have had a short-term reputational boost for being seen as pioneers for diversity.
But since then, the fragility of their results has been exposed and I think the reputational hit has actually exceeded the short-term boost that they got.
- Alex Edmans, we could talk to you for another week about this stuff.
But the book is "May Contain Lies," and it is brilliant.
Thank you so much for spending some time with us this week.
That is all the time we have this week.
But if you wanna know more about the show, you can find us at salve.edu/pell-center, where you can always catch up on previous episodes.
He's Wayne, I'm Jim Ludes, asking you to join us again next time for more "Story in the Public Square."
(pleasant music) (pleasant music continues) (pleasant music continues) (cheerful music)

Story in the Public Square is a local public television program presented by Ocean State Media