By Geoff Bennett, Courtney Norris, Dorothy Hastings

Transcript Audio

Over the past few months, artificial intelligence has managed to create award-winning art, pass the bar exam and even diagnose illnesses better than some doctors. But as AI grows more sophisticated and popular, the voices warning against the potential dangers are growing louder. Geoff Bennett discussed the concerns with Seth Dobrin of the Responsible AI Institute.

Read the Full Transcript

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Geoff Bennett: Over the past few months, artificial intelligence has managed to create award-winning art, pass the bar exam and even diagnose illnesses better than some doctors. But as A.I. grows more sophisticated and popular, the voices warning against the potential dangers are growing louder.

Italy has become the first Western nation to temporarily ban the A.I. tool ChatGPT over data privacy concerns, and more European countries are expected to follow suit. Here at home, President Biden met yesterday with a team of science and tech advisers on the issue and said tech companies must ensure their A.I. products are safe for consumers.

We're joined now by Seth Dobrin, president of the Responsible A.I. Institute and former global chief artificial intelligence officer for IBM. It's great to have you here.

Seth Dobrin, President, Responsible A.I. Institute: Yes, thanks for having me, Geoff. I really appreciate it.

Geoff Bennett: And most people, when they think of A.I., they're thinking of Siri on their cell phones. They're thinking of Alexa or the Google Assistant. What kind of advanced A.I.
technology are we talking about here? What can it do?

Seth Dobrin: Yes, so what we're talking about here is primarily technology called large language models, or foundation models. These are very, very large models that are trained, essentially, on the whole of the Internet. And that's both the promise and the scary thing about them: the Internet basically reflects human behavior, human norms, the good and the bad about us. And the A.I. is trained on that same information.

And so, for instance, OpenAI, which is the company that built ChatGPT, which most everyone in the world is aware of at this point…

Geoff Bennett: There are a few who still aren't, but…

Seth Dobrin: Yes, a few who still aren't, yes.

(LAUGHTER)

Seth Dobrin: But it was trained on Reddit, right, which, from a content perspective, is really not where I would pick. But for teaching a machine to understand how humans converse, it's great. And so it's pulling the good and the bad from the Internet, and it does this in a way…

Geoff Bennett: Because, we should say, Reddit is like a chat site.

Seth Dobrin: Yes, yes, Reddit is a chat site. And you get all of these conversations, good and bad, going on in things called subreddits. And so there's a lot of hate, there's a lot of misogyny, there's a lot of racism in the various subreddits, if you will.

And if you think about what it's ultimately doing, think of it as auto-complete, but on a lot of steroids, because all it's doing is predicting what's going to happen next based on what you put into it.

Geoff Bennett: Well, the concerns about the potential risks are so great that more than 1,000 tech leaders and academics wrote this letter recently, as you know, calling for a temporary halt of advanced A.I. development. And part of it reads this way: "Recent months have seen A.I.
labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

What is happening in the industry that is causing that kind of alarm?

Seth Dobrin: So, I think there is some concern, to be honest. This technology was let out of the bag. It was put into the wild in a way that any human can use it, in the form of a conversational interface, ChatGPT. The same technology has been available to A.I. engineers and data scientists, which are the professionals that work in this field, for a number of years now.

But it's been in what's called a closed beta, meaning only approved people could get access to it. That controlled environment was good, because OpenAI — OpenAI makes ChatGPT — and others were able to interact with it, learn, and give feedback. For example, when the first version came out, you could put in, what is Seth Dobrin's Social Security number, and it would give it to you, right?

Geoff Bennett: Wow.

Seth Dobrin: Or, what is every address Seth has ever lived at? And it would give it to you. It doesn't do that anymore. But these are the kinds of things that, in the closed environment, could be controlled.

Now, putting this out in the wild — there have been lots of, pick your own nihilistic metaphor. It's like giving the world uranium and not teaching them how to build a nuclear reactor, or giving them a bioagent and not teaching them how to control it. It really can be that scary. But there are some things that companies can and should do to get it under control.

Geoff Bennett: Like what?

Seth Dobrin: So, I think if you look at what the E.U. is doing, they have an A.I. regulation that regulates outcomes. Anything that impacts the health, wealth, or livelihood of a human should be regulated.

There's also — so, I'm president of the Responsible A.I.
Institute. The letter also calls for tools to assess these things, and that's what we do. We are a nonprofit, and we build tools that are aligned to global standards. Some of your viewers have probably heard of ISO standards, or CE. You have a CE stamp or UL stamp on every lightbulb you ever look at.

We build ways to align, or conform, to standards for A.I. And they're applicable to these types of A.I. as well. But what's important — and this gets to the heart of the letter as well — is, we don't try to understand what the model is doing. We measure the outcome, because, quite honestly, if you or I are getting a mortgage, we don't care if the model is biased. What we care about is, is the outcome biased, right? We don't necessarily need the model explained. We need to understand why a decision was made. And it's typically the interaction between the A.I. and the human that drives that, not just the A.I. and not just the human.

Geoff Bennett: We have about 30 seconds left. It strikes me that the industry is going to have to police itself, because this technology is advancing so quickly that governments can't keep pace with the legislation and the regulations required.

Seth Dobrin: Yes, I mean, I think it's not much different than what we saw with social media, right? I mean, I think, if you were to bring Sam Altman to Congress, you'd probably get about as good responses as Mark Zuckerberg did, right? The congresspeople need to really educate themselves. If we, as citizens of the U.S. and of the world, really think this is something that we want the governments to regulate, we need to make that a ballot box issue, and not some of these other things that we're voting on that I think are less impactful.

Geoff Bennett: Seth Dobrin, thanks so much for your insights and for coming in. It's good to see you.

Seth Dobrin: Yes, thanks for having me, Geoff. Really appreciate it.
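Editor's note: Dobrin's "auto-complete, but on a lot of steroids" description — a model that repeatedly predicts the likeliest next word from patterns in its training data — can be sketched with a toy bigram model. Everything here (the corpus, the function names) is invented for illustration; real large language models use neural networks trained on web-scale text, but the generation loop is the same: predict a continuation, append it, repeat.

```python
# Toy sketch of next-word prediction, the core idea behind "auto-complete
# on steroids." A bigram model counts which word follows which in its
# training text, then generates by repeatedly appending the most likely
# next word. (GPT-style models learn far richer patterns, but the
# predict-append-repeat loop is the same.)
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def autocomplete(model: dict, prompt: str, length: int = 5) -> str:
    """Repeatedly append the most likely next word to the prompt."""
    words = prompt.lower().split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break  # never saw this word during training; stop
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigrams(corpus)
print(autocomplete(model, "the cat", length=3))  # -> "the cat sat on the"
```

The point of the toy: the model has no notion of truth or intent, only of what tended to follow what in its training data — which is why training on "the good and the bad" of the Internet carries both through to the output.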
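Editor's note: Dobrin's point about auditing outcomes rather than model internals can be sketched as a simple approval-rate audit. The group labels, data, threshold, and function names below are illustrative assumptions, not the Responsible A.I. Institute's actual tooling or any standard's required metric; the sketch only shows what "measure the outcome" can mean in practice for something like mortgage decisions.

```python
# Minimal sketch of outcome-based bias measurement: ignore how the model
# decided, and check whether decisions differ across groups.
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparity(rates):
    """Largest gap in approval rates across groups (0.0 = parity)."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Hypothetical audit log of mortgage decisions (group labels are made up).
mortgage_outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = approval_rates(mortgage_outcomes)
print(rates)             # -> {'group_a': 0.75, 'group_b': 0.25}
print(disparity(rates))  # -> 0.5, a gap this large would warrant review
```

This is the sense in which the outcome, not the model, gets assessed: the audit needs only the decisions and the affected groups, and works the same whether the decision came from a neural network, a rule book, or a human-plus-A.I. workflow.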
PBS NewsHour, Apr 05, 2023

By Geoff Bennett
Geoff Bennett serves as co-anchor and co-managing editor of PBS News Hour. He also serves as an NBC News and MSNBC political contributor. @GeoffRBennett

By Courtney Norris
Courtney Norris is the deputy senior producer of national affairs for the NewsHour. She can be reached at cnorris@newshour.org or on Twitter @courtneyknorris

By Dorothy Hastings