
Story in the Public Square 7/27/2025
Season 18 Episode 4 | 27m 30s | Video has Closed Captions
On Story in the Public Square, analyzing the impact of artificial intelligence on theology.
The history of humanity is the history of individuals making decisions collectively and individually. Artificial intelligence, AI, brings a new player into the mix: machines capable of making decisions alongside or instead of humans. Authors Sean O’Callaghan and Paul Hoffman grapple with the theological implications of the new technology.
Story in the Public Square is a local public television program presented by Ocean State Media

- The history of humanity is the history of individuals making decisions, sometimes collectively, sometimes individually.
Now, artificial intelligence, AI, brings a new player into the mix, machines capable of making decisions alongside or instead of their human counterparts.
Today's guests grapple with the theological implications of this new technology.
There's Sean O'Callaghan and Paul A. Hoffman this week on "Story in the Public Square."
(bright music) Hello, and welcome to "Story in the Public Square," where storytelling meets public affairs.
I'm Jim Ludes from The Pell Center at Salve Regina University.
- And I'm G. Wayne Miller, also with Salve's Pell Center.
- And we're joined today by two great thinkers.
Sean O'Callaghan is an associate professor of religious and theological studies at Salve Regina University.
And Paul A. Hoffman is an associate professor of biblical and religious studies at Samford University.
Together, they are co-authors of "AI Shepherds and Electric Sheep: Leading and Teaching in the Age of Artificial Intelligence."
Gentlemen, thank you so much for joining us today.
- Thanks, Jim; thanks, Wayne.
- So the book is, you're covering a lot of ground here.
We've got a lot of artificial intelligence, we've got a lot of theology, but we've got a lot of questions about human existence and what it means to be human in this technological age.
For you, Paul, starting, how did you come to this particular project?
- That's a great question, Jim.
It all started, actually, a couple miles from where we're recording this, in a restaurant in Providence, a couple weeks after ChatGPT had been released upon us.
I was having a meal with a pastor buddy of mine.
He lives in Massachusetts.
And he pulled out his iPhone and he said, "I wanna show you this. Have you used ChatGPT before?"
I'd heard of it, but I'd not used it.
And I forget what voice prompt he put in there, but he, you know, asked it to write an essay on these two topics.
80, 90 seconds, it spit out a feasible essay.
Blown away, couldn't believe it.
And so that left me reeling and struggling and wrestling.
A number of months later, I ended up running into Sean, not far from that chicken joint in Providence.
He and I had both come to an event, unbeknownst to one another; we were coming out of COVID.
And Sean and I met at this event.
And if I recall correctly, I relayed to him what had happened at this restaurant.
And I've known Sean for many years.
He's an expert in transhumanism and the intersection of humans and machines.
And I told him about what happened just a couple miles west of us.
And I said, "Hey, would you ever wanna write a book about this? What do you think?"
And that started us down this crazy road.
- Well, so maybe a good place to start then, Sean, would be just a... We've had other guests on to talk about artificial intelligence, but just a quick little refresher for the audience.
What are we talking about when we're talking about artificial intelligence today?
- There are so many definitions.
A lot of them are very technical.
I really like a definition that comes out of a book by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher.
And it's called "The Age of AI."
And it basically says that AI is a new way, a new and powerful way, of organizing reality.
So basically, it's a new way of looking at the world.
It's very powerful technology, which shapes our whole world, but also shapes how we look at it.
So there's a lot of technical definitions, but that's my favorite one, because it talks about society.
- But we're talking about machines, algorithms being used to produce something that almost mimics human intelligence.
Is that fair?
- Yeah, that's exactly what's happening, yeah.
So it's mimicking human intelligence, but it's lacking that kind of understanding.
It's lacking the nuances that we have as human beings, a kind of, for want of a better term, the sixth sense we have when we go into any situation, we know what's going on in the room, we can read a room and all that.
AI isn't really able to do that.
So it mimics human behavior, I would say, mimics human intelligence, but it lacks a lot of what human beings actually have.
- So you both were talking about when you first heard about it.
It has evolved since then and continues to rapidly evolve.
Is that correct?
- Yeah, well, ChatGPT, I suppose, and generative AI, was the area which over the last two years has evolved very, very rapidly.
And that's kind of, that's changing, again, how we see the world.
It changes how people read, it changes how people write.
I know in education, we're very concerned about ChatGPT and plagiarism and so on.
But there's much more at stake, I think, because as human beings, for millennia, we've been writing and we've got our thoughts down.
We've deliberatively thought through issues and got them down.
And now we're relying on a machine to do it for us.
So I think we lose a lot.
- Well, the thing that I've always worried about with this is that, for me personally, writing is thinking.
- Yeah.
- And so if we surrender that piece of it, are we surrendering thinking?
- I think we are, yeah, I think we are, because we, as I say, it's deliberative.
You know, when you are writing, you're thinking about things, you're thinking about a wide range of issues.
You're choosing the best word, but you're also choosing the best atmosphere and kind of thinking about all of that.
And you're bringing in a lot of different issues, and now you're handing it over to a machine when we've been doing it really well for thousands of years.
- So, why do people want to hand it over to a machine?
- (chuckles) That's the key question, isn't it?
- It is a key question, yeah.
- Yeah.
Intelligent machines, I think, is what John McCarthy called them, intelligent machines.
Sean and I were actually talking about this earlier today, that we live in a society now that's very instrumentalist, that's very, we believe in efficiency.
We could almost say that efficiency, this whole, you know, "Let's make our lives as simple as possible."
So we're increasingly... And this has been a trend prior to the release of AI to the public, because AI has been developed over decades.
This didn't just, you know, poof, appear out of nowhere.
But for a long time now, we, as humans, are relying more and more on technology to give us easier lives, smoother lives, lives that we would say have less struggle, have less pain.
And in that sense, you know, Sean and I would agree that yes, we want AI to come up with better cancer diagnosis and better ways of diagnosing and curing and treating cancer.
Yes, we want AI to help with diagnostic imaging.
AI is being used to help organize various, you know, huge files.
And organizing them, and then, you know, looking for anomalies or differences among the files.
Totally in agreement with that.
But what we're concerned about, and we talk about in the book, and Sean's already addressed this, is that we don't want it to replace us.
Because part of what it means to be human is to think and to create and to wrestle and to iterate and to explore.
Part of the world is exploring and learning by iterating, by trying.
And this is, to the point of writing, my wife's an English teacher.
This has been a nightmare for my wife, because my wife is trying to teach students the beauty of "The Great Gatsby," "A Raisin in the Sun," you know, books like this, that they'll actually read the books, think about the meaning of the books, and then work out their thoughts on paper through writing an essay.
And writing an essay is not just about articulation, although that's very important.
It's also about perseverance, learning to write and then edit, and then rewrite, and then get better.
And the more that we write and rewrite, if we're doing it well, we're sharpening our thinking.
We're building these muscles of perseverance, that I'm not just gonna give up and quit, but no, I need to, this paragraph doesn't fit with this paragraph.
And so to write and to create is part of what it means to be human.
And we don't want that to be lost because we wanna defer everything over to a machine that will just do it for us.
- So you both are theologians, so Paul, let's stay with you on this.
Where does this move into a theological set of questions?
- I would say from the outset, AI is challenging what it means to be human.
And so one of the central questions we ask in the book, I would call it the research question, is this, how might AI help or hinder human flourishing?
How might this powerful technology, that does mimic human intelligence, how might it help us grow and flourish as humans on one hand, but on the other hand, how might it hinder our human flourishing?
So for example, when my wife's students use ChatGPT or Claude or some other program, Bard, whatever the case is, to write an essay for them, that's actually hindering their human flourishing.
Because if you're in 9th grade, part of your development is to learn how to think critically.
And to not just think critically, but to articulate it through a convention of English writing and prose, having to put your thoughts down on paper.
If you just use a computer to do that for you, you're actually short-circuiting your human development.
You're not developing those critical thinking skills and muscles.
And so in that sense, you're not gonna be fully formed, you're not gonna be fully developed if you try and skip a whole section of your development as a human being.
- The temptation, though... Let's take that 9th grader.
The temptation to use it, though, must be great, because you can-- - And very human.
- And very human.
(group chuckles) Let's face it, you get the paper written quickly.
And then you have time to, I don't know, go on TikTok or play sports or whatever you do, right?
Isn't there a temptation here?
- Yeah, because-- - Which is a very human quality, of course.
- Because I think that the issue is like, are we telling kids to eat their spinach, right?
So my wife is a math educator, at least was earlier in her career, and she tells the story, you know, that when she was graduating from high school in 1990, they were not allowed to use calculators on their AP calculus exams.
But four years later, when she was teaching AP calculus, it was an expectation.
And so there's a transformation there.
Are we gonna have that same kind of transformation?
What's at risk in replacing human...
I guess, let me phrase it this way, can't we use artificial intelligence to enhance human flourishing rather than necessarily detract from it?
- I think we can, but I think we have to do it selectively.
So we have to teach people how to use AI.
It just has to be done.
Because it is the future, you know, to use a cliche, it definitely is.
And now we have major companies, finance companies, who are developing their own ChatGPTs to organize the information that they have.
So our students going into these companies have to know how to be able to use it.
But they have to do that in a very selective way, I think.
And I think if they use it wholesale without understanding what it's taking away from them, then I think we're in danger.
And I think we have to find a way to get this across not only to the students, not only the young people, but to anybody using AI: you have to really think very, very critically about it.
So we have to have that conversation in society, I think.
- Well, so let me get very meta about this.
I was on a panel recently where we were talking about the AI in society.
And, you know, I would observe that we live in a time in the United States now where the Supreme Court has said that corporations have the same free speech rights as individuals, and that the expenditure of money is the same as speech.
And so then the question that I passed into this group was, so is there a scenario in some future where a court is going to rule that an algorithm, an artificial intelligence, has free speech because it's owned by a corporation?
Now, we don't have to grapple with that now, but the questions that leap to my mind after reading "AI Shepherds and Electric Sheep" are things like, can AI be sentient?
Does AI have consciousness?
And can AI have a soul?
- Yeah.
- I think we'd agree no.
- No, no.
And there's a huge amount of discussion around that, because people are saying, yeah, AI hasn't got it now, but in the future, this could be something. I was reading something yesterday where it talked about AI being kind of like a godlike being almost, in charge of everything, and then the internet is like its soul.
And all the sensors we have are like its senses, you know, its hands and its feet and all of that.
So almost like AI could evolve into something bigger, something that could even be called a godlike being, like an all-powerful technology.
So there are people thinking this is the way it'll go.
It's not there now.
I don't see it ever going in that direction.
But it's a question really being discussed a lot.
- The question of whether or not they could have a soul, though, this is something that's fundamental to at least Christian theology, right?
- Yes, that's right.
In the book, we talk about this idea of being made in the image of God.
That's Genesis 1 and 2, which is in the Judeo-Christian tradition, that only human beings, as I understand it, are made in God's image, that have the ability to have reason and rationality, and the sense of thinking through, but not just thinking through A plus B equals C, but also picking up cues, visual cues, having emotional intelligence.
To be made in the image of God is that we are thinking, sensing, perceiving beings that also have hearts, that have souls, that think of beauty, that understand that everything is not down to rationality or numbers, right?
Part of the challenge is we live in what's been called a technocratic society, an instrumentalist society, where everything is about efficiency, everything's about the bottom line, right?
Did your corporation make more this year than last year?
Well, if we start going in that direction, then what about the humans that can't produce at the same rate?
Are they not worthy of value and meaning if they don't produce at the same rate?
And so I don't think we want to get to a place as a society where everything comes down to the bottom line.
Because to me, you know, we've seen other societies go in this direction.
I go back and I think of, I think of Nazi culture where they started getting rid of people that they perceived were not useful.
- Eugenics, yeah.
- Yeah, eugenics.
I mean, there's a long history.
It's not just the Nazis or eugenics, but it starts to go to this idea of efficiency and instrumentality.
"What can you produce? Okay, well, you can't produce anymore. We have to end your life. We have to send you off," you know, "in a box in the sea." You know, "You're taking up space."
And that's the danger.
If we take the philosophy that's underpinning much of AI, and we extrapolate that out, that's where we're headed if we don't have the curb of a rich, I would argue, and I think Sean would, a robust theological vision, a robust theological understanding of the world.
You know, Charles Taylor called it the transcendent frame, this idea that there are values that come to us from another place in another space in another time.
Otherwise, we're gonna end up in a Darwinian world where the strong eat the poor and the rich destroy the weak.
And we don't wanna get to that place.
And that's if we're not careful how we think about AI and we just say, "Cool, can it make my life easier? Cool. Can it produce more income? Cool. Can it do this and this and this and this for me?", forgetting that to be human is to struggle.
To be human is to create works of beauty.
To be human is to iterate and to try and to fail and to try again.
And so we don't wanna delegate, we don't wanna forfeit what it means to be human to intelligent machines that are mimicking what it means to be human, but are fundamentally not human.
- So part of being human is interpersonal relationships, whether those are friendships or they're, you know, spouses or partners, child, mother, father, whatever.
What does AI portend for interpersonal relationships?
I think that's a huge question.
- It is huge.
And there are a lot of programs out there, chat bots and programs that allow you to build avatars that look like the kind of people you wanna have a relationship with.
And people are having relationships with these entities.
I think part of it is there's an epidemic of loneliness in the world right now.
It's huge.
- Yeah, absolutely.
We've talked about that often on the show, yeah.
- Yeah, and I think it's leading a lot of people to look to these bots, look to these avatars, to form relationships with them.
But they're like, they have to be empty relationships because you've created this being.
This being is almost like a mirror of yourself, so there can be no relationality involved in that.
- Isn't part of being a human being, and being fully human, having to navigate complex real relationships?
If you have a chat bot that is engineered to your peculiar interests, tastes, attitudes, how does that set you up to actually operate in the real world where you have to engage with real people every day?
- Yeah, I mean, it's been proven by psychologists, sociologists, that we are social beings.
That when a child is born, they need to be held, they need to be touched.
That's where they learn to mirror the emotions and to be fully developed, right?
They've done the studies.
If you're raised in your first five years without normal human interaction, you end up with an attachment disorder.
You're not able to attach to other people in normal human relationships.
If you're not used to making eye contact and being held and hugged and loved, if you don't learn how to interact with other people, it profoundly impacts your development.
And I'm not a social scientist or a psychologist, but this is study after study.
The one thing I would encourage your viewers to look up is something called the ELIZA effect.
The ELIZA effect.
And we talk about this in the book, that one of the concerns with AI is that we, as human beings, are attributing to machines, to bits of code, okay, to data points that are moving in between networks, zeros and ones moving in between networks, we're in danger of anthropomorphizing, or attributing to machines, human-like qualities.
And so we're gonna, we're starting to treat machines like they're humans, but they're not humans.
And what we forget is that all interaction is dialogical.
It's a give and take, it goes back and forth.
And so I'm not just shaping machines, machines are shaping me.
So if I start interacting with a machine casually, I might not learn how to have manners.
I might not learn to say, "Wait a second, foul. That's an inappropriate word," or, "That's offensive to me," or, "I don't agree with that."
We just start to fall into bad habits.
And if you're interacting with a machine, that's not like me interacting with the two of you.
It's just not the same.
It can't be the same.
And this is gonna be a problem going forward, because there are concerns, for example in andragogy, or education, and Sean talks about this in one of his chapters, that in the future, students are gonna be raised by chat bots.
That, why do you need a teacher when I can have an all-knowing computer teach me everything I need?
I was just telling him earlier today that my best friend's daughter has a friend who is learning about religion, about what we call the Old Testament or the First Testament, by talking to ChatGPT.
She'd be better served talking to me or Sean, trained theologians that have PhDs in the study of religion, than talking to a machine that's just pulling bits of code out of the ether.
And by the way, machines hallucinate.
They make things up.
They make things up.
- Well, so Paul, you've been a minister.
You are a minister.
- Yes, I still am.
- And you have run a church.
- [Paul] Yep, 18 1/2 years.
- Are there appropriate uses for artificial intelligence?
You talk about this in the book, so I know you've kinda answered this.
- [Paul] Thank you.
(group chuckles) - What are the appropriate uses for artificial intelligence in ministry, and what are the things that ministers would be wise to say, "Don't go there?"
- Yeah, thanks for asking that.
Sean alluded to this earlier.
We advocate for something called selective engagement.
That is to say, there are some things it may be wise to consider using a machine for, but there are other things that, we would argue, are unwise.
So in our final chapter before the conclusion, chapter seven, speaking to those engaged in ministry, we make recommendations.
We have go sign recommendations, we have slow sign recommendations, and we have stop sign recommendations.
So for example, one of the stop sign recommendations is do not have an AI chat bot, if you're a minister, write your sermon or Bible study.
- Why?
Why not?
- Because you, as the minister or rabbi or religiously trained person, are uniquely trained to do this.
But not only that, we believe that God calls people to enter a certain vocation.
And you have a sense of accumulated wisdom, through experience, through loss, through disappointment, through failure, that cannot be replicated by a machine.
Not only that, we believe scripture teaches that people have what are called spiritual gifts, that the Holy Spirit endows people with certain gifts.
One of those gifts is discernment.
That I can listen to you, and sometimes God will say something to me, this is in my faith tradition, that God will give me knowledge that I might not have had outside of this.
But the spirit of God will give me insight into your life that maybe was not disclosed to me prior, but in listening to you, discerning from you, I pick up on a pattern or something, and then I'm able to address that with you.
I do not believe machines have spiritual gifts.
I don't believe they have lived experience in the world.
And so when I get up there and preach, when I get up there and speak on Sundays, I'm a human being made in God's image and likeness, speaking to other human beings made in God's image and likeness.
And I'm not just, you know, preaching is not just data transfer.
It's really a sacred conversation that's occurring between me and God and me and the people that I have loved and given an oath, taken oaths, to care for, protect, and lead well.
And so what we call the homiletical moment, or the preaching moment, to me, is sacred because that involves this special union between me, God, and the congregation, and that God has called me to love them.
God has given me gifts with which to love them.
God has even given me discernment.
We're in the middle of a message.
I'm picking up on someone that's like, "Mm, I'm not buying it."
Or someone's like, "Huh, what is he talking about?"
And then you've gotta adjust and go off script a little bit and speak into that situation.
- And a machine can't do that.
- Not to my knowledge.
I don't think it ever...
It can mimic it, but it can't do it in a deep, significant way.
- Paul, Sean, we've got literally about 45 seconds left here, so I'm gonna ask you a massive question.
Pope Leo XIV has said that AI is an emerging and important issue.
Can you give us just a taste of what we can expect from the pope?
- Yeah, it's interesting.
So I was watching the announcement that, "Habemus papam, we have a pope," and the cardinal announcing the name of the pope said, "Leo XIV."
I knew immediately what that was all about, immediately.
'Cause I look back to Leo XIII, who dealt with the whole age of industry, and he had to deal, you know, he's this pope of Catholic social teaching, who's looking at the mechanics of the time, the mechanical world of the time, and trying to make sense of it.
And I knew that that would happen with Leo XIV.
So I think it's gonna be very interesting to see what he does, what he talks about.
The Catholic church, actually, is doing some really good work on AI.
They've got some good documents out.
It's flying under the radar a bit.
People aren't really noticing it.
It's not been talked about.
But it's excellent stuff.
So I'll be very interested to see what he comes out with.
- Well, we have just scratched the surface with this book.
- Yeah, we have, there's so much more we didn't get to.
- But gentlemen, this is a tremendous conversation.
"AI Shepherds and Electric Sheep."
Sean O'Callaghan, Paul Hoffman, thank you so much for being with us.
- Yeah, thanks.
- Thank you.
- That is all the time we have this week, but if you wanna know more about "Story in the Public Square," you can find us on social media or visit pellcenter.org, where you can always catch up on previous episodes.
For G. Wayne Miller, I'm Jim Ludes asking you to join us again next time for more "Story in the Public Square."
(bright music) (bright music continues) (cheerful music)
