Carolina Business Review
September 27, 2024
Season 34 Episode 9 | 26m 46s | Video has Closed Captions
AI Special: With Igor Jablokov of Pryon and Dr. Collin Lynch, Artificial Intelligence Academy, NC State
Carolina Business Review is a local public television program presented by PBS Charlotte
(bright music) - [Announcer] This is "Carolina Business Review."
Major support provided by: Colonial Life, providing benefits to employees to help them protect their families, their finances, and their futures; High Point University, the premier life skills university focused on preparing students for the world as it is going to be; Sonoco, a global manufacturer of consumer and industrial packaging products and services with more than 300 operations in 35 countries.
- As we navigate this technological revolution called artificial intelligence, strategic frameworks are emerging to try to harness AI's potential while addressing critical concerns like, oh, data privacy, ethical governance, and of course workforce displacement.
I'm Chris William.
Welcome again to "Carolina Business Review," seen across the Carolinas for more than three decades.
In South Carolina, the establishment of a dedicated AI committee to proactively approach regulating AI's impact on the state rests on three principles: protect, promote, and pursue.
In North Carolina, we're witnessing a not-so-gradual increase in AI adoption among businesses for sure, with applications ranging from marketing to data analytics, to strategic planning, to mitigating potential job losses and fostering economic growth, et cetera, et cetera. We will try our best to discuss strategies and the implications for residents, businesses, and of course, policymakers in the Carolinas as they adapt to the evolving landscape of AI.
Please stay with us.
(pensive music) - [Announcer] Major funding also by: Truliant Federal Credit Union, proudly serving the Carolinas since 1952 by focusing on what truly matters, our members' financial success.
Welcome to Brighter Banking; BlueCross BlueShield of South Carolina, an independent licensee of the BlueCross and BlueShield Association; and Martin Marietta, a leading provider of natural resource-based building materials, providing the foundation on which our communities improve and grow.
(upbeat music) On this edition of "Carolina Business Review," Igor Jablokov of Pryon and Dr. Collin Lynch from the Artificial Intelligence Academy at NC State University.
- Hello, welcome again to our program, and thanks for joining us.
It's important to note at the top of the dialogue here that we are literally producing the show about two hours after Hurricane Helene rolled through and continues to roll through the Carolinas.
So it's a technology challenge for sure, but we're glad to be bringing you this dialogue around AI remotely.
And joining us now, both Igor and Collin.
Gentlemen, welcome to the program.
I wanna start with something.
Igor, right before we came on the air, you related a story about your legacy with Amazon's Alexa.
Can you relate that story to us again?
It's fascinating.
- Sure, I used to lead an early IBM AI team.
And when they were not working as fast as I wanted them to on the development of an early incarnation of Watson, I departed, stood up a company in Charlotte, North Carolina, and five years later, it ended up becoming Amazon's first AI-related acquisition that birthed what many of you now know as Alexa.
So Alexa's my older sister's name, which is a coincidence, and the code name for it was Prime.
So the point is, you don't even have to think of AI as alien technology that is being birthed on the West Coast.
The Carolinas actually are part of the formation of some of these AIs that people now deal with day in and day out.
- That is an interesting story, Igor.
And Collin, let's bring into the dialogue, how widespread, how deep, how broad is AI in the Carolinas?
Not just from a standpoint of you being a professor at NC State around it, but in general, is it mushrooming?
Is it deeper and wider than maybe we even know?
- I would say it's exploding because we have new companies, of course, moving into the area and expanding.
We've had most of the major players, Google, Apple and so on, expanding in our region.
But also, since I'm in the College of Engineering at NC State, we are seeing companies in all other fields come to us and start inserting AI into their operations.
Engineering companies, supply chain companies, for example, are working on adding AI to monitor their processes, their businesses, help them do what they do.
So we are seeing it spread out throughout the industry.
We're seeing it in demand for all the jobs, and we're seeing, of course, pure AI companies coming as well.
- You know, gentlemen, let's start with a big question.
As we talk about AI, it dominates, and not just business, but personal conversations as well, cocktail conversations, coffee shop, it's about AI.
How did AI, what kind of critical mass did AI reach that caused it to be so effusive and ubiquitous now in all of these conversations only within the last 18 months?
Igor?
- Yeah, so in some ways, it was the perfect storm where you had to have four evolutions.
The first layer was the hardware, right?
The data centers, the servers, the GPUs like Nvidia.
The second layer was the training data that was then available for the third layer, the foundation models to be born.
Those are the large language models, the constructs inside things like Copilot, inside things like Gemini or ChatGPT.
And now the fourth layer is going to be the AI applications.
Now, that's a positive way of framing why this stuff has shown up on the scene over the course of the last 18 to 24 months.
Another way to look at it, unfortunately, is the fact that on the West Coast there were essentially five taboos that were broken, in terms of a nonprofit and potentially copyrighted content ending up in these models, and an alignment problem in terms of allowing a product to be released that hallucinated.
So these are some of the things that were, I would say, governance firewalls that were holding a lot of other big tech companies from essentially releasing this style of AI.
But now the cat's out of the bag.
- You know, Collin, Igor just used the term hallucinated.
Does hallucinate accurately describe the way that AI presents?
- It's a useful term.
I would say the thing to understand about large language models, for example in generative AI, is that they are mile-wide, inch-deep models.
Which is to say that ChatGPT has been trained on a lot of text, a lot of web discourse, copyrighted text, as Igor mentioned.
But it doesn't have a deep knowledge base.
It doesn't understand the world.
So it's prone to making mistakes that we term hallucinations.
And what that really is is it's jumping to shallow understandings or shallow mappings in the vector space.
So that's why you see things like if you use DALL-E and you tell it to draw a pair of hands and you hide a finger, it will generate an extra finger and so on and so forth, because it doesn't have a deep knowledge model that understands things like object occlusion, but all humans do after the age of about six months.
So we term that a hallucination.
It is really the model inventing things that sound right in the chat or look right in the image without understanding the basic semantic content humans have.
- Yeah, somebody said to me recently, "Yeah, I don't trust AI," and I know there's a lot of that out there.
"I don't trust AI 'cause it's garbage in, garbage out," is what they said.
And you know, it's a decent argument.
So I think in the early days here as it deploys and it emerges more and more, what is the risk, and this is for either one of you gentlemen, what is the risk that these early-stage hallucinations or inaccuracies or wrong data interpreted the wrong way scare people away and make them even more opposed to any type of AI deployment?
- I would say it is a high risk.
But I would actually, I would say that it's right to be careful because I think it's actually an even higher risk that somebody will take one of these models with a shallow understanding, deploy it, and do some damage.
So it's, in fact, highly likely.
And you see this with OpenAI, for example.
They have released a platform, people are building apps on it.
You can go online and find apps that promise to be your therapist, promise to give you medical advice, things like that.
It has no knowledge for that.
And so I think there is a high risk that people will take that advice, act on it, and bad things will happen.
I do think that will scare people, but I think it's equally concerning that that will happen and people will actually be hurt by this.
- Igor?
- So it's tricky.
- Yeah, Igor, please.
Sorry, go ahead.
- Now the good news is we foresaw this, and so seven years ago we started working on an enterprise-oriented version of this style of technology that could be operating inside out from your private enterprise content and resources.
And it can even run on on-premise servers, meaning completely walled off from the outside world.
And it's using a style of technology called retrieval augmented generation, where the answers that are getting rendered, to Collin's point, are only from the internal resources of the hospital system, of the air base, of the nuclear power plant and things of that sort.
That's why we have clients such as Nvidia, such as Westinghouse, such as the Air Force, World Economic Forum, leveraging this style of technology because they can bound the solutions sets to the sources and methods that they trust with full attribution.
Meaning when it renders an answer, you can click on it and it literally goes to the exact page and highlights where it learned it from, because people don't trust technology, as you mentioned, Chris, but they do trust other people.
And so showing the authorship of the underlying answer is very important.
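To make the retrieval-augmented generation idea concrete, here is a minimal, hypothetical Python sketch of the retrieval-and-attribution step Igor describes: answers come only from a bounded set of trusted internal passages, and each answer links back to its exact source and page. The documents, the simple keyword scoring, and the names are illustrative assumptions, not Pryon's actual implementation.
```python
# A minimal, hypothetical sketch of the retrieval step in retrieval-augmented
# generation (RAG): answers come only from a bounded set of trusted internal
# passages, and every answer carries attribution back to its source and page.

from dataclasses import dataclass


@dataclass
class Passage:
    source: str  # e.g., an internal manual the enterprise trusts
    page: int    # page number used for attribution
    text: str    # the passage content itself


# The bounded "knowledge cloud": only documents the enterprise has connected.
TRUSTED_PASSAGES = [
    Passage("Maintenance_Manual.pdf", 12, "Pump seals must be inspected every 90 days."),
    Passage("Outage_Procedures.pdf", 4, "Scheduled outages require sign-off from the shift supervisor."),
]


def retrieve(question, passages):
    """Return the passage with the most keyword overlap with the question, or None."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for p in passages:
        score = len(q_words & set(p.text.lower().split()))
        if score > best_score:
            best, best_score = p, score
    return best


def answer(question):
    hit = retrieve(question, TRUSTED_PASSAGES)
    if hit is None:
        # Refuse rather than hallucinate: nothing outside the trusted corpus is used.
        return "No answer found in the trusted sources."
    # Attribution: the user can trace the answer to the exact page it came from.
    return f"{hit.text} [source: {hit.source}, p. {hit.page}]"


print(answer("How often must pump seals be inspected?"))
```
The refusal branch is the bounding Igor describes: if the trusted sources don't contain an answer, the system says so instead of inventing one, and every answer it does give shows its authorship.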
- You know, Igor, so then how do we make sure that the inputs to AI are not politically adulterated?
So when the outputs are not trying to, like much of the, I hate to say it this way, but much of the legacy media is more about conveying a political viewpoint than it is actually reporting data in the news.
And a lot of folks are very sensitive to that, and they would overlay that on top of AI.
How do they know that the output is about the pure data and is something that can be assimilated?
- Because in this incarnation of AI, the enterprise can control what goes into it.
So we knew that what the public web was going to become was a hall of mirrors, right?
Where you don't even know the source and method attribution and who's actually authoring the content.
In many cases, the things that you may be reading in social media could not even be US-originated in terms of thought trying to essentially influence one of our voters.
In this particular platform, the enterprise has to connect the SharePoint repositories that they trust.
SAP, ServiceNow, Salesforce, Documentum, Confluence, all of the existing enterprise software stack that they have gets connected into the AI, and so it's a fully bounded problem in terms of the resources that they trust.
The way I typically describe it is that it becomes a knowledge cloud that has the four P's: public information they trust from academic institutions such as Collin's or government agencies; published information that's properly licensed into the organization.
For instance, if they want programming from your show, it's properly licensed into the AI model for rendering answers, proprietary stuff that's the crown jewels that defines them as a brand, and then personal information that's for your eyes and ears only.
This is a new way of thinking about the problem and the way that you're gonna see it used in more serious use cases.
- Dr. Lynch, when you use AI, when you use a service, and you may use it to do early research maybe, where do you feel most comfortable using it, and how do you know that the credibility of the data that goes into it is giving you an output that you could actually use to go further?
- Well, I would say I tell my students to not use random services at all to search for information.
I tend to prefer to use, you know, more classical search systems to locate information.
But you know, to Igor's point, I am locating the information, following it up, reviewing it myself.
I have had students go down rabbit holes trying to use an AI aggregator for information.
They'll just get lost.
So I don't trust it for finding new information.
I do, however, trust models we control, typically open-source foundational models that we use, much as Igor described, with our own data to filter or generate information that we trust.
So it's not something where I am going to look to the AI to distill information for me.
I'm gonna look to the AI to locate sources for me and stop there.
- Yeah, Igor, how do you characterize the same question?
- Yeah, and that's why we always fall back to the safety net of the enterprise defining the resources that they would be pulling in.
So in the example of this particular program, if you wanted to make an AI, you know, essentially sharing the thought leadership of various guests, you only point it to the videos that you've recorded over the sum total of three decades, and that's where it's rendering its answers from.
There's no other content that it could use in terms of generating its particular output.
And that's why we've seen, for instance, it being used in outage and maintenance services at a nuclear reactor site because they predicted, if they had such an AI, they could reduce the downtime of these very serious constructs by half, meaning they don't have to spin up fossil-fuel-burning plants, especially in deep summer and deep winter.
But remember, the output is only coming from trusted data sources that were authored by their engineers and technicians.
- Gentlemen, what are we thinking about?
What are we not thinking about?
Because, we think, it's such early days.
And we like it, it's like a novelty, and I've got it on my phone and I can talk to it and it talks back to me.
And wow, it can actually coherently or cognitively converse with me.
But what are we not thinking about that we need to pay attention to?
- I think I would say there's two things we're not thinking about enough.
The first is that AI has become very much a service.
And so when you talk to your phone, for example, when you use various apps that, oh, it talks to me, it generates something, that's great.
We're not realizing that, you know, OpenAI, for example, Microsoft and Gemini, a few of the others are becoming foundational models and foundational services that everybody's building on top of, which means a flaw in that core model or in that core service, say, a hallucination or bad information just gets propagated everywhere.
And all of our use of that gets tracked.
So we're starting to see a situation where people are building lots and lots of tools on top of the same untrusted foundation, and that can have long-term ripple effects.
The second thing I think we're not thinking about is the extent to which we are already experiencing a world mediated by AI.
So you mentioned information, you know, coming from various sources attempting to influence us.
We're already seeing that.
In fact, research has shown that most people are already getting their news information online, either through social media or other sources, which means we're already experiencing events like, say, a debate mediated through a lens, not necessarily an explicitly partisan lens, just one that always confirms our priors, which can make us overestimate things that we see again and again and again.
And that's just changing the way we approach things, changing the way we communicate in effect, right?
If I go on social media and I click on a story about crime, I'll see more stories about crime.
And research from the 70s shows that we'll overestimate how much crime there is once we do that.
- Igor?
- So it's a simple example.
- Yeah, and building on top of that, I mean, you have to think of these things as pretending to be analysts.
They're pretending to be attorneys, they're pretending to be physicians, in the same way that any of our family members or friends that are in those fields would walk past a television set playing "ER" or "Law & Order" and inevitably groan that they would be sued for malpractice or disbarred, or the patient would pass if they actually followed that procedure, or the person would be arrested if they did that.
So the point is, especially the B2C, right?
The consumer incarnations of this technology are interesting entertainment tools in some ways just for you to kind of, you know, poke around particular topics.
But just know you're getting things that sound fantastical and true, but they may not be.
And that's why there has to be a separate walled-garden version of this thing where enterprises and government agencies take control of their own knowledge cloud that has information that they trust from both internal and external sources with proper attribution and authorship.
And those are the ones that they're gonna be able to use for decision support and decision advantage.
- Let me ask you both this question, Igor, I wanna stay with you for right now, at least.
At least as you know now, are the debates rigorous enough around what regulatory framework needs to be put in place on public policy, either federal or state or local?
- I think one of the exciting opportunities is, in the same way that Carolina-headquartered banks rose to such prominence because we created a southeastern banking compact and were able to align the regulatory environment to allow these things to essentially thrive as a super region, we also have an opportunity, maybe individually North and South Carolina as we try to, you know, regulate this style of technology and where it's good to be used and where we have to watch out, we have a similar opportunity to create a construct between Virginia, North and South Carolina, Georgia, so on and so forth, in order to align the state attorneys general into one super bloc.
Because right now, California's leading the charge on trying to define how these technologies can be used and not used.
And I think if we want these things to be created with our values intact, while individually we may not have any sort of critical mass where we can affect the outcome, especially against some of the big tech incarnations of this technology, there is a path in the past that was properly leveraged that we can use in the future as well to regulate these things.
- Yeah, let me do a quick follow-up with you on that, Igor.
So you infer, and maybe I got this wrong, that really the power and the control over the regulatory idea to enhance or govern or encourage AI cooperation belongs on the state level.
Is there a federal element of this?
- Of course, right?
You had the Biden Administration releasing an executive order in terms of how this stuff could be used in civilian use cases.
There's an upcoming national security memorandum on AI coming out in terms of military and intelligence use of these technologies as well.
But at the state level, you know, just like you saw GDPR in Europe and now the EU AI Act, California did the CCPA on the privacy side, there is an opportunity to start defining proper governance of these technologies at the state level as well.
- Collin, to take a little bit of a different tack, 'cause we're actually running out of time, the secretaries of commerce in North and South Carolina, Machelle Sanders and Harry Lightsey, have both used this term when they talked about economic development on this program, actually.
And they said that there is a looming energy crisis when it comes to growth and economic development.
If that's the case, and we've heard many stories about how AI needs much more energy than just kind of a passing plug into a wall outlet, Collin, is there a looming energy crisis, and how do we fix it if there is, to make sure AI gets the oxygen, the energy, that it needs?
- I think there is gonna be a looming crisis because, as you say, AI, or rather the data centers for AI, our need for large-scale, always-on, responsive AI, is demanding lots and lots of power, growing by insane amounts, and we simply cannot build power plants to keep up with it.
But the thing is, nothing requires us to be building AI exactly this way.
Many of the models that we are using that we are most attracted to the, you know, instant response from, say, Gemini, we don't actually have to do that for every enterprise, right?
We have foundational models that can be used locally.
We have ways of hosting local, smaller-tuned versions.
So I think that there is a crisis.
I do think we will need to improve our construction of, say, renewable energy resources to satisfy that without creating other problems.
But I think we can also solve it by, and this has happened in the past with AI, facing the pressure and slimming down AI, making models that are smaller, targeted, lightweight.
That much we can do.
We just haven't faced the pressure to do it as much.
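As an illustration of the "smaller, local model" point, here is a minimal, hypothetical sketch of hosting a lightweight open-source model on local hardware with the Hugging Face transformers library; the model name (distilgpt2) and the prompt are example assumptions, not recommendations or anything the guests endorse.
```python
# A minimal, hypothetical sketch of the "smaller, local model" idea: load a
# compact open-source checkpoint on local hardware instead of calling an
# always-on hosted service.

from transformers import pipeline

# Runs entirely on the local machine once the weights are downloaded;
# there is no per-query round trip to a large hosted foundation model.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Smaller, locally hosted language models can"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```
A slimmed-down, targeted model like this trades breadth for a far smaller hardware and energy footprint, which is the pressure Dr. Lynch is describing.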
- We have about two minutes left, and let me ask you both this question, so we're not gonna have a lot of time to unpack this idea.
But Collin, I wanna stay with you just quickly here.
Where do you see, let's just say two or three years from now, where is AI gonna be most prevalent?
Now it's the novelty, now we're talking to it on the phones, but where do you think is gonna be the obvious place that it's really excelled?
- I think where we're gonna see it more is in our day-to-day lives.
We're gonna experience a lot more just smaller agent-based systems.
So it's novel now, but we're gonna see a lot more tools on the phone, tools in our lives that mediate information, filter it for us and generate, and people are just gonna experience that.
It will be Alexa everywhere, if you will.
But the other thing is, I think we are getting a lot more AI on the road, AI in the skies, self-driving vehicles, self-delivery and so on.
That is gonna explode, I think, more into public consciousness.
If you think about it, most new cars already have some measure of AI.
We're gonna see a lot more of that.
- Okay, thank you, Collin.
We have about a minute left, Igor, where do you think it's gonna be most deployed?
Okay, Igor, we're gonna ask you something.
Can you check the muting of your mic? Because it seems we don't hear your audio, and we've done-
- Oh, very good.
- There we go.
Yeah, you got about 30 seconds.
Go ahead.
- Yeah, very good.
Yeah, it's gonna be sitting on top of every data store that you can think of, contracts, financial instruments and things of that sort.
It's going to be leveraged by boards of directors to look for insider risks before people have an opportunity to short them.
So I think it's gonna make for more transparent governance structures, it's gonna reduce service delivery times and what have you.
So there's a lot of positives that it's gonna unlock for us as well.
- Igor, thanks.
I hate to cut you both off because, you know, we're just scratching the surface, of course.
And that's no pun intended.
We truly are.
Thank you both for joining us, especially during a little bit of a harried time here at the end of Hurricane Helene.
Igor, good to see ya.
Safe travels.
Collin, also thank you.
Until next week, I'm Chris William.
Goodnight.
(pensive music) - [Announcer] Gratefully acknowledging support by: Martin Marietta, BlueCross BlueShield of South Carolina, Truliant Federal Credit Union, Sonoco, Colonial Life, High Point University, and by viewers like you.
Thank you.
(upbeat music)