The Chavis Chronicles
John Pasmore & Todd McDonald
Season 6 Episode 620 | 25m 52s | Video has Closed Captions
John Pasmore & Todd McDonald discuss AI innovation and banking solutions with Dr. Chavis.
Dr. Chavis interviews two leaders shaping America’s financial and technological future. John Pasmore, CEO of Latimer AI, explores how artificial intelligence can drive equity, innovation, and new opportunities. Todd McDonald, CEO of Liberty Bank, discusses expanding economic access, strengthening communities, and advancing inclusive banking solutions.
The Chavis Chronicles is presented by your local public television station.
Distributed nationally by American Public Television
>> I'm Dr. Benjamin F. Chavis Jr., and this is "The Chavis Chronicles."
>> As an optimist, that's what I look at.
Like, use this as a -- you know, as an extension.
You know, maybe it would have taken you six months to code something up, and maybe you wouldn't have spent that six months, but now it might take a matter of days with an AI assistant to do that.
So, what new innovations will that lead to?
>> Major funding for "The Chavis Chronicles" is provided by the following.
At Wells Fargo, we continue to look for ways to empower our customers.
We seek broad impact in our communities, and we're proud of the role we play for our customers and the US economy.
As a company, we are focused on supporting our customers and communities through housing access, small-business growth, financial health, and other community needs.
Together, we want to make a tangible difference in people's lives.
Wells Fargo -- the bank of doing.
American Petroleum Institute -- our members are committed to accelerating safety, environmental, and sustainability progress throughout the natural gas and oil industry.
Learn more -- api.org/apienergyexcellence.
Reynolds American -- dedicated to building a better tomorrow for our employees and communities.
Reynolds stands against discrimination in all forms and is committed to building a more diverse and inclusive workplace.
♪♪ >> We are most honored to have one of our nation's leading computer scientists, founder and president of Latimer AI.
Welcome, John.
>> Thank you.
Thank you for having me.
Appreciate it.
>> You started in communications. How did you migrate to computer science?
>> I've always had an interest in technology.
And you're right.
You know, I spent years in media, and obviously, in creating media, you're using technology.
And, you know, we knew each other in New York City.
And I was always going to school.
I have a degree in business, but I had gone back to Columbia University part time to get a computer science degree.
So, that effort, that computer science degree, actually turned out to be well timed -- you know, went back as an adult, took probably 10 years of classes.
But in this AI revolution -- you know, my degree was not that far before that -- kind of prepared me to really understand what the technology was and what the importance was.
I kind of knew that what OpenAI had delivered was really at least a decade -- advancing the timetable by a decade, meaning that we were seeing things, seeing capabilities that we didn't think we'd see for quite a while.
So I knew it was really important.
And obviously having a computer science degree was important for Latimer.
>> They say timing is very important.
You were at the right place at the right time.
>> I would have to agree with that in this case, yeah.
And not only that -- we were able to quickly form a business.
You know, one of the things that we saw early on with OpenAI -- we were all impressed with the technology, but then we saw that it was demonstrating bias and inaccuracies, particularly around Black and brown people, around diverse populations.
And from my perspective, I looked at that as kind of a technical issue.
It's really in the training data and how these models are informed.
So, we quickly stood up Latimer AI.
We launched in January of '24 with a product that we've managed to evolve fairly quickly that's essentially addressing the bias in AI.
>> What is the mission of Latimer AI?
>> To mitigate the bias and increase fairness, really, in artificial intelligence.
Specifically, the issue that we're working on is how Black and brown people are represented in AI and what the cultural fluency is of these machines.
And as you can imagine, when you start to look at Black and brown people in Europe or Africa or different places in the world, there's different information that's required.
So we're focused on the story, the history of Black and brown people here in the United States as a first step.
>> In the communities which we work and serve, there's sometimes fear of technology, fear of change, fear of innovation.
As an entrepreneur who just started this great company, obviously you didn't have a phobia about innovation.
>> Right.
>> Can you tell us the importance of having a perspective that's always forward-leaning rather than a perspective that's kind of anchored only to the past?
>> Yeah.
I mean, I think there's a little dichotomy there -- and especially the African-American community that's also looked at as being early adopters.
So, whether it was, you know, games or mobile phones, you know, we see these big companies that are always looking to the Black community to be the early adopters.
So, there's a part of our community that is fairly forward-looking.
And that's -- You know, we need that.
We need that because that becomes kind of the entry point for some of these technologies into the broader community.
And with AI in particular, you know, this -- You know, from my perspective, I look at AI as "could change everything," you know?
It could change education.
It could change a lot in the job market.
So it's something that we really have to pay attention to.
>> Could it change for the better?
>> We don't know.
You know, the funny thing is, with what OpenAI did, is they released a product that wasn't fully baked or fully tested.
But it created an AI arms race because all these other large companies -- Amazon, Microsoft, Google -- had been working on the same technology for years.
But once OpenAI released it to the public, then everybody's kind of forced to jump in and release their product to the public.
And we saw the mess, and we still see it.
Elon Musk's AI entry, Grok, saying something that's, I would say, fairly anti-Semitic.
So, these models can still make mistakes, so to speak, again, based on their training data.
We are, you know, testing these and helping these companies build these machines.
And we don't know the impact on jobs.
We see, you know, various prognostications about, "Hey, this is going to have a negative impact," or, "It's gonna create new jobs."
We actually don't know.
But we do know that especially for young people, they need to be fluent in AI.
>> What would be your recommendation to public school systems around the country?
What should be the entry point in the curriculum for computer science, for innovation and data analysis?
>> Yeah, I mean, with these new tools, right, there's a big fear about diminishing the critical-thinking ability of whoever's using it because you can simply ask a question.
You don't have to have technical knowledge.
You just ask a question in words, and it can give you an answer, meaning you're not necessarily doing the work.
And how does not doing the work affect your true understanding?
So I think we do need to be a little bit careful.
We were talking a little bit earlier about how, in computer science, what you're learning to really do is problem-solve.
And I think that's the skill that we really want to have young people retain, that capability of how to approach a problem, how to frame it, and how to figure out a solution for that problem, and in a step-by-step manner.
So, to the degree of how young, I think coding itself, maybe -- maybe -- I don't know.
You know, we're seeing tools already that can do -- can write a lot of code.
So how coding is taught will change over the next several years.
You may not need to learn exactly how to code, as it was maybe even three or four years ago.
But it's still a good practice because what you're learning when you learn to code is, really, you're learning logic and you're learning -- you're kind of understanding, "Well, how does -- how am I giving the instructions to the machine, a computer, to solve a problem in a step-by-step manner?"
And I think that that's always gonna be a great skill.
>> And it seems to me the mission of Latimer is to make sure that the playing ground is level around machine learning, artificial intelligence.
>> Yeah, I mean, on the home page of Latimer, we say "AI for everyone" because we think that everybody wants AI that is trained on the entire history of everybody that's actually built this country and, you know, the civilization as we know it.
And that's how we'd like to -- that's how we'd like to see AI adopted, really, with everyone participating.
>> Do you see AI being a place where people from different cultures, different racial backgrounds, different socioeconomic backgrounds can approach AI in a way where they see the commonness or the oneness of our humanity and not just so much the diversity of our humanity?
>> It has that potential.
I was reading a Substack article by a VC recently.
He talked about, you know, the rush to AI versus, you know, what is your belief system?
And I thought that that was interesting in the sense that we -- We do have fundamental beliefs as a country.
Sometimes we stray from that.
To the degree that AI can surface those beliefs, you know, what we have right now is a country, you know, where we've been somewhat divided.
Right now we have so much noise, and I think it's very hard, even in social media, to determine what's really true, or even in traditional media now, which has been disrupted.
It's become, you know, factions.
So maybe AI can kind of bring us back to what the common denominator is and what we're really trying to build as a country, hopefully.
>> When you use AI, how do you construct the question?
>> It depends on what you're trying to get to.
Generally speaking, though, the longer your question, the better your response.
You can frame AI -- You can tell it, "Well, speak to me like I'm a 5-year-old."
If you don't know what the -- have domain knowledge in an area -- "Hey, I'm gonna ask a question about quantum physics, and speak to me like I'm a 5-year-old."
So it's gonna kind of break it down into very small pieces so it can bring you along that journey to educate you versus just saying, "Hey, I want the answer to this question," as if you're doing homework.
It'll just give you the answer, but you won't understand necessarily how it got that answer.
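The framing idea Pasmore describes -- longer prompts and an explicit audience steer the answer -- can be sketched as a small prompt-builder. The function name and framing phrases here are illustrative, not any particular product's API:

```python
# Sketch of audience framing: the same question yields very different
# responses depending on the framing text attached to it.

def frame_prompt(question, audience=None, want_steps=False):
    """Assemble a prompt from optional framing pieces plus the question."""
    parts = []
    if audience:
        parts.append(f"Explain this like I'm {audience}.")
    if want_steps:
        parts.append("Walk me through the reasoning step by step.")
    parts.append(question)
    return " ".join(parts)

# Bare question vs. framed question:
print(frame_prompt("What is quantum entanglement?"))
print(frame_prompt("What is quantum entanglement?",
                   audience="a 5-year-old", want_steps=True))
```

The bare call passes the question through untouched; the framed call asks the model to break the topic into small pieces, which is exactly the "bring you along that journey" behavior described above.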
>> Well, if you were a parent of some of these children, what would you instruct?
Would you just let everything go?
Or should there be some structure on how computer science, how data, how AI is used in the home?
>> Yeah, we're going to need a new level of oversight in some degree from parents in the responsible use of AI because, you know, it's -- Sam Altman, the founder and CEO of OpenAI, ChatGPT, likes to say, "Hey, my AI, ChatGPT, has read the entire Internet and remembers it."
So, that's actually true, that AI can read the entire internet, everything that it can see, and remember it.
So when you're speaking to it, it has that basis to respond.
But we don't only want our kids to understand or just get a quick answer because if you're not doing the work, you might not understand the topic.
You might not understand even how to get that same answer without the machine.
So, you know, we do need parents to keep an eye on young people and make sure -- especially if they're doing their homework.
You have this -- We're in this period where, you know, AI is racing ahead, but maybe the school system that their children are in hasn't figured out what to do with AI.
And even in higher ed and even in colleges, we have colleges where, you know, the professor can allow all, some, or no AI use in their class.
It's up to them.
A lot of educational institutions haven't figured out, "What are we gonna do with this?"
because it gives young people the ability to do their homework in seconds, but maybe they're not really learning.
>> In the panoply of AI companies, do you have conversations with the Googles of the world or the people that own Facebook, Instagram?
>> I would say the most conversations early on with Google and -- You know, we'd like to continue to have conversations with those companies.
And we certainly use them, you know, the underpinnings of our technology.
We have a database.
Latimer is actually its own database.
We created something that's called a RAG -- retrieval-augmented generation -- meaning that our database sits on top of other foundation models.
So if you go to Latimer AI, you have a choice of 10 different models that you can use.
And they're all using our database, Latimer, which, again, has more information on Black and brown history and culture.
So, whether you want Latimer plus OpenAI or Latimer plus Google or Latimer plus -- you know, Anthropic is another big company in this space -- you have all of those choices.
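The architecture described here -- one curated database layered over interchangeable foundation models -- can be sketched roughly as follows. All names are illustrative stand-ins, not Latimer's actual implementation; the "model" is stubbed so the retrieval-plus-prompt pattern is visible:

```python
# Minimal retrieval-augmented generation (RAG) sketch: a shared
# knowledge base is searched first, and the retrieved context is
# prepended to the prompt sent to whichever foundation model is chosen.

def retrieve(query, knowledge_base, top_k=2):
    """Rank documents by simple word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query, context_docs):
    """Prepend retrieved context so any backing model answers from it."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Use the following context to answer.\nContext:\n{context}\nQuestion: {query}"

def answer(query, knowledge_base, model=lambda prompt: prompt):
    """Retrieve, assemble, and hand off to a pluggable model callable."""
    docs = retrieve(query, knowledge_base)
    return model(build_prompt(query, docs))

kb = [
    "Lewis Latimer invented an improved carbon filament for light bulbs.",
    "Liberty Bank expands economic access in its communities.",
]
print(answer("Who was Lewis Latimer?", kb))
```

Because the model is just a callable argument, the same database can back any of several foundation models -- the "Latimer plus OpenAI or Latimer plus Google" choice described above.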
>> Even though we have rapid growth, it's not closed.
It's not a closed industry.
>> No, I mean, most people would assume that we're kind of in the first inning.
So this is like the very beginnings of the Internet, so to speak.
So, you know -- >> Really?
>> You know, there's a ton... >> 'Cause most people I talk to think we're in the ninth inning.
>> Yeah.
>> You said we're in inning one.
>> Yeah.
>> That's an interesting perspective.
>> Yeah, I mean OpenAI really came out in 2023, you know, with their first big public launch.
So I would say that we're very early on.
There's a bunch of new technologies that are still on the horizon, whether it's quantum computing.
So -- And, you know, we see this in this most recent release from Google.
They've migrated from Nvidia to using their own chips.
So there's still a lot of moving parts.
And, you know, ultimately, a big part of the user base of this is going to be young people.
And I think that products can be built from that perspective, that maybe young people want to see a different AI than Google wants to present.
>> When I was a kid, I used to read Scientific American.
>> Yes.
>> What books were most impactful to you to go into computer science?
>> My dad had a radio repair shop way back in the day.
And, you know, so it was always in the house.
There was an oscilloscope.
There was all this stuff that -- I don't know -- just made me kind of curious about technology.
So, I think that was the biggest thing for me.
And then even in the media business, you know, we came into the media business at a time when you could create -- You know, what we see now in video, you saw in desktop publishing, where you had traditional media that took a lot of work, let's say, to make a newspaper or magazine, but suddenly you can do it on a personal computer.
So, whether it's vibe or source or -- we had one world -- you could create that piece of media in an office as opposed to in a factory.
And so I think we're seeing the same thing now, where people are looking at video or image creation or text creation.
"Hey, these are young machines.
This is a young technology."
And obviously there's a tremendous amount of innovation in our community, so I think that we'll find uses of this technology in new ways and hopefully the ability even -- Now that you can use some of this technology to write code, that's no longer a barrier.
So, whatever the application, whatever you think that your phone or your computer should be able to do, you can create that platform, that application now.
>> You mentioned that your focus at this stage of your company is the United States.
Is the United States competitive on the global stage?
Is China ahead of the United States in AI?
Is Russia ahead of the United States in AI?
Or is the United States ahead of those two?
>> It's hard to say.
From a consumer standpoint, it does seem like we're on the leading edge.
China has an AI platform called DeepSeek, and others, DeepSeek being the most well known, that seems very, very advanced.
Obviously, we don't know -- we have no visibility into how much state funding they receive to create what they've built.
But in terms of functionality, very advanced.
There's a whole -- You know, we're talking generally about consumer AI, but then you have, you know, the Palantirs and the Peter Thiels of the world that are creating AI for the Defense Department.
And so, you know, I think what we've seen in maybe the Ukraine or even in Israel is that all of this AI technology has another application.
And in some ways, we're kind of beginning to see what the defense applications are.
And that's, I think, where we're competing perhaps with China, where our current administration doesn't want to see any roadblocks for our domestic creation of AI because they frame it in that sense.
Like, if you're Russia or China or India or Nigeria, you have a history and culture that is maybe not reflected in Gemini, in Google's Gemini.
So they're creating their own, which makes sense.
But, yeah, there's somewhat of an arms race in terms of human capital, as well as the amount of money that it takes to build some of these foundation models.
>> There are over 120 HBCUs, historically Black colleges and universities, in America.
Tell us -- have you had a chance to intersect with any of these HBCUs on the mission of Latimer AI?
>> Yeah, well, we've had tremendous success with HBCUs.
We have a very talented PhD student working with us now from Morgan State who's doing tremendous work for us.
And all I see is talent.
You know, we've done a trial at Morehouse.
We've deployed to Bennett and Tennessee State University.
Miles College in Alabama was the first university in the United States to license Latimer, and we appreciate that.
And, you know, there's just a ton of talent there that we hope to be able to benefit from.
But hopefully the other tech companies -- You know, we can't, obviously, employ everybody, so hopefully other tech companies see that same thing, that there's a tremendous -- >> If a college licensed Latimer AI, how does that bring benefit to that school?
>> Well, it gives them some -- you know, kind of a uniform playing ground to some degree.
And different colleges work differently, but we supply some analytics so you can at least see how your students are using this tool.
If you have to do your homework using this tool, you can understand, "Well, you know, did the student do their homework five minutes before class, or did they start their assignment a week -- when it was assigned?"
-- number one.
And with universities like Southern New Hampshire University, they're just integrating this into their computer science classes, where they have a robot car that's using our model to take voice prompts and translate that into the direction of the car.
And we're also having a conversation at the same university with their nursing students.
So, we can add additional materials, training materials, to make Latimer smart for a specific domain.
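The robot-car exercise mentioned above -- voice prompts translated into the direction of the car -- can be sketched with the language model stubbed out as a keyword matcher. Everything here is illustrative, not the actual coursework:

```python
# Sketch of a voice-to-drive pipeline: transcribed speech in,
# a (dx, dy) drive vector out. A real system would route the text
# through a language model; a keyword lookup stands in for it here.

DIRECTIONS = {
    "forward": (0, 1),
    "back": (0, -1),
    "left": (-1, 0),
    "right": (1, 0),
}

def prompt_to_command(transcribed_text):
    """Map a transcribed voice prompt to a drive vector."""
    for word in transcribed_text.lower().split():
        if word in DIRECTIONS:
            return DIRECTIONS[word]
    return (0, 0)  # no recognized direction: stay put

print(prompt_to_command("Please turn left at the cone"))  # (-1, 0)
```

The same pattern of swapping in domain-specific behavior behind a fixed interface is what allows the nursing-school variant described next: only the materials the model is grounded in change.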
>> Do you envision that a time will come when you have to be certified in AI?
>> I think that that's probably a pretty good idea that you have some sort of base understanding of -- >> Standards, standards.
>> Yeah, of usage and, you know, what would constitute misuse of AI.
I think that certainly makes sense.
You know, we're getting -- You know, we can begin to see on the horizon where some of these platforms could be autonomous, meaning they can do things.
And that's -- You know, the stated goal, the stated mission of now Meta and OpenAI is to create something called AGI, which is artificial general intelligence, meaning that pretty much those machines can emulate human thinking to the degree that they're autonomous.
They can solve -- They can recognize and solve their own problems.
>> John Pasmore, given your significant career in computer science and technology, today what gives you your greatest hope?
>> I think I'm just an optimist by nature.
And I just see -- You know, I've spent so much time over the last year, really, on different college campuses from, you know, Miles in Alabama and down in Texas and North Carolina, that you just see so much intelligence and innovation with young people that I think that they're going to take this technology and run in new ways.
We work with an artist, Delphine Diallo, who looks at AI as extending her capabilities.
She can do things now as an artist that she never thought or might have taken a year in the past to do.
So, I think, you know, as an optimist, that's what I look at.
Like, use this as a -- you know, as an extension.
You know, maybe it would have taken you six months to code something up, and maybe you wouldn't have spent that six months, but now it might take a matter of days with an AI assistant to do that.
So, what new innovations will that lead to?
>> John Pasmore, the founder and CEO of Latimer AI, thank you so much for joining us.
>> Thank you for having me.
Appreciate it.
>> For more information about "The Chavis Chronicles" and our guests, visit our website at TheChavisChronicles.com.
Also, follow us on Facebook, X, LinkedIn, YouTube, Instagram, and TikTok.
