Artificial intelligence was a focus on Capitol Hill Tuesday. Many believe AI could revolutionize, and perhaps upend, many aspects of our lives. At a Senate hearing, some said AI could be as momentous as the Industrial Revolution, while others warned it is akin to developing the atomic bomb. William Brangham discussed that with Gary Marcus, one of those who testified before the Senate.
Aside from debt ceiling negotiations, Capitol Hill was also focused today on what to do about artificial intelligence, the fast-evolving, remarkably powerful computer technologies that many believe could revolutionize and perhaps upend many aspects of our lives.
The metaphors used to describe A.I. at a Senate hearing today reflected this spectrum. Some said this could be as momentous as the Industrial Revolution. Others warned it's akin to developing the atomic bomb.
Sen. Josh Hawley (R-MO):
We could be looking at one of the most significant technological innovations in human history.
Today, senators and experts weighed in on the gravity and growing risks of rapidly developing A.I.
Gary Marcus, Co-Author, "Rebooting A.I.": We have unprecedented opportunities here, but we are also facing a perfect storm.
Sam Altman is the CEO and founder of OpenAI, which is at the forefront of this new technology.
Sam Altman, CEO, OpenAI:
But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too.
Artificial intelligence gained prominence when his company's product ChatGPT was launched in November. It can answer complex questions with humanlike responses at startling speeds. But it also makes big mistakes.
A.I. technology can also generate remarkably realistic images or audio known as deepfakes in an instant, like this one of Pope Francis sporting a coat he never wore. A.I. can also imitate people's speech, as Senator Richard Blumenthal demonstrated today.
Sen. Richard Blumenthal (D-CT):
And now for some introductory remarks.
Too often, we have seen what happens when technology outpaces regulation.
Sen. Richard Blumenthal:
You might have thought that voice was mine and the words from me. But, in fact, that voice was not mine.
In the hearing today, senators raised a series of concerns about this technology, among them, how this example of voice mimicry could be used to spread disinformation.
What if it had provided an endorsement of Ukraine surrendering or Vladimir Putin's leadership?
Others raised concerns about everything from privacy, to copyright protections, to the potential elimination of people's jobs.
Sen. Josh Hawley:
So, loss of jobs, invasion of privacy, personal privacy, on a scale we have never before seen, manipulation of personal behavior, manipulation of personal opinions, and potentially the degradation of free elections in America. Did I miss anything?
So, the question of the day, can A.I. be successfully regulated? Many agreed there should be some governmental body to establish global rules and norms, even issuing or revoking licenses to A.I. systems.
I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards.
But some lawmakers doubt they have the knowledge to even handle it.
Sen. Richard Durbin (D-IL):
The magnitude of the challenge you're giving us is substantial. I'm not sure that we respond quickly and with enough expertise to deal with it.
Gary Marcus, a leading voice in A.I., emphasized the need for an international body like those that regulate nuclear research.
Ultimately, we may need something like CERN, global, international and neutral, but focused on A.I. safety, rather than high-energy physics.
While he supports regulation, Altman warns that it should not stunt the benefits of this new technology.
I think it's important that any — any new approach, any new law does not stop the innovation from happening with smaller companies, open-source models, researchers that are doing work at a smaller scale. That's a wonderful part of this ecosystem and of America. We don't want to slow that down.
And joining me now is one of those who testified before the Senate today.
Gary Marcus is the co-author of "Rebooting A.I."
Gary, thank you so much for being here.
Before we get into the dangers of A.I. that you laid out before the Senate today, I know that you have said that you have loved A.I. since you were a little kid, and you see some remarkable potential for this technology for humanity. Make that case for A.I. first.
Well, I love it as a cognitive scientist. I have studied how children learn language for a lot of my career. And it's just a fascinating intellectual question to solve it.
But, also, there's a potential, I think, to revolutionize science and medicine, to help us solve things that we can't solve on our own. For example, molecular biology has tens of thousands, or hundreds of thousands, of molecules in the body. No one human can understand all of that.
A.I. might really revolutionize medicine. It might also help us with climate change. We might be able to build eldercare robots that help us with the upcoming demographic inversion, where we have more elderly people than young people, to take care of them.
So, there are lots of practical implications. And it's also just cool. It's just interesting for anybody who's grown up on science fiction books, written computer programs, and so forth. So, I have always been interested in it, and I would like to see it succeed.
So, as you mentioned, science fiction often has pointed to some of the potential downsides of that.
And you were warning about some of those today before the Senate. What are the other major concerns that you have about this technology?
I mean, it's actually a long list. And I keep thinking of Donald Rumsfeld's quote about unknown unknowns. We don't really know the scope of it.
But just to start with, we know that democracy is threatened because we're likely to enter a regime where bad actors can use these tools to make essentially infinite amounts of misinformation at zero cost. That's incredibly plausible.
And so we're going to enter an era where nobody trusts anything. That's obviously not good. We're going to enter an era where everybody denies any evidence presented to them in court and says, well, that's just made up, even when it isn't.
We're going to have chatbots that encourage people perhaps to commit suicide or do other terrible things, giving bad medical advice, medical — psychiatric advice. We also have the potential for a lot of cyber crime. So these new tools can be used to manipulate people, and to do so, again, at a scale we haven't seen before.
And, in the long term, we don't really know what would happen if machines got out of control and did things we don't want them to do. We're not really prepared for any of this.
Are there any current examples of A.I. doing some of these things that are the seeds for these future fears that you have?
NewsGuard has already released a study showing that 40 or 50 different websites are generating their news automatically.
We have already had things like CNET generating automated news that turned out to be flawed. I guess those are the first major signs. We have already seen at least one case of a suicide that seems to be associated with a chatbot, where the chatbot — I don't even want to say on air, but it gave bad advice and didn't refer somebody to a professional.
So those are just some examples we have already seen. And we will see a lot more.
So there is not a great track record when it comes to the government effectively regulating technologies.
But yet you were a part of the group that was arguing today that we have to do something to try to marshal the governmental powers to get around this technology. What would you like to see done?
Well, in fact, I argued in my TED Talk, and a few weeks ago in an invited column for "The Economist," that what we need here is an international A.I. agency that brings together scientists, governments and companies to try to figure out what's best here, to try to align global policy.
There was pretty strong agreement in the room that that might be a good idea, that at least we need to do that at a national level. I don't think existing agencies are really up to the task of keeping up with the speed at which A.I. is evolving and coordinating with each other.
I think we need some kind of central regulation around this. There's lots of complicated political problems. But there was a strong bipartisan sense in the room that we do need to do something along these lines. One of the specifics that I called for in the room was that we have something like an FDA procedure for large models that are deployed en masse.
So if you had something that was used by 1,000 people, that's a research project. That's fine. But if you want to release something to 100 million people, then you should do a safety analysis, and there should be someone outside of your company that evaluates it and makes sure it's OK.
And then, after the fact, there should be additional monitoring as new issues come up.
Because we haven't really seen that thus far.
What we have seen is basically the development of these tools, and then their relatively quick release into broad use in the public.
With ChatGPT, I don't think anybody was really prepared for it to go out to — for more than 100 million people to subscribe. I think that took everybody in the field by surprise. The technology is not that different from what we saw a couple of years ago. And so none of us in the field realized how much it would resonate with the public.
And there isn't really a culture now or a standard now about, when is it OK to release it to everybody?
Isn't the horse already out of the barn in this regard? I mean, let's just say you come up with some global governance strategy here, and Google and Microsoft and the Americans and the Chinese developers and the Indian developers all get on board.
Won't there always still be rogue actors who can use smaller versions of this technology to do all of the things that you're most worried about?
Some of the horses are out of the barn and some aren't.
So we already, for example, have publicly available models that can be used to generate misinformation. I believe that we can build new technologies, but it's going to be hard, to try to mitigate some of those risks. There are other technologies that haven't even been built yet, like self-improving machines or self-aware machines that maybe we don't want to build at all.
And so I think this is kind of a dress rehearsal for even more sophisticated technologies than the ones that we have now, which are flawed and unreliable. And it's an opportunity to learn how to close at least some barn doors before all the horses have escaped.
You and several other people who have been studying this for a long time have been raising these warnings of late.
Do you think we, as a society, are going to hear these warnings in time?
I was really heartened by what I heard today in the Senate.
I saw a lot of bipartisan alignment and a lot of recognition that we hadn't really handled the Internet right. I saw a lot of people with, I think, intellectual honesty and humility who were very much wanting to do the right thing here.
And so there's a long way to go from here, but I couldn't have been more satisfied with the meeting.
All right, Gary Marcus, host of the podcast "Humans vs. Machines," thank you so much for being here.
Thanks for having me.
William Brangham is a correspondent and producer for PBS NewsHour in Washington, D.C. He joined the flagship PBS program in 2015, after spending two years with PBS NewsHour Weekend in New York City.
Courtney Norris is the deputy senior producer of national affairs for the NewsHour. She can be reached at email@example.com or on Twitter @courtneyknorris
Jonah Anderson is a News Assistant at the PBS NewsHour.