03.10.2025

Reid Hoffman on What Could Possibly Go Right With Our AI Future


BIANNA GOLODRYGA, ANCHOR: Well, we turn now to an issue that’s been puzzling governments, investors, and ordinary people alike, the potential benefits of artificial intelligence weighed against the risks. Our next guest, Reid Hoffman, co-founder of LinkedIn, believes that the list of benefits is long. He joined Walter Isaacson to talk about the new book he co-authored, “Superagency.”

(BEGIN VIDEOTAPE)

WALTER ISAACSON, CO-HOST, AMANPOUR AND CO.: Thank you, Bianna. And, Reid Hoffman, welcome to the show.

REID HOFFMAN, CO-FOUNDER, LINKEDIN AND CO-AUTHOR, “SUPERAGENCY”: It’s great to be here.

ISAACSON: And you’ve been at the forefront of everything in the digital revolution, you know, from LinkedIn to PayPal. And now, of course, artificial intelligence. You’ve got a new book called “Superagency.” It’s very optimistic about A.I. But let me start with the title and with the word agency; let’s leave aside superagency for a minute, just agency. When we humans use it, it means we have the free will to make a plan and take an action. Is that what you mean here? And can computers do that?

HOFFMAN: Well, I’m not saying that computers can do that. This is actually a book about human agency, both individual and societal, hence the kind of superagency and superpowers. By the way, the philosophers might dispute different definitions of whether or not it’s agency and free will. But agency is an expression of our fundamental kind of humanity and how we organize our lives and connect with each other. And the thesis around superagency is that A.I., like other general-purpose technologies in our history, will help us do that.

ISAACSON: Right. And what you talk about in the book is a notion that working with the symbiosis between the machines and ourselves will empower us more. Explain why you think we get empowered more when our machines are doing more and more things.

HOFFMAN: Well, I mean, take cars, for example, as a historical case, which, by the way, prompted a similar discussion about whether or not they would destroy human society, change patterns of human interaction, et cetera. And cars actually give you agency, they give you superpowers, the ability to go faster, to go further distances, et cetera. And by the way, you get superagency with cars, because not only do you get that superpower, but other people do too. A doctor might be able to come visit you versus you having to try to get to the doctor. And computers, I think, and A.I. will do the same thing.

ISAACSON: When you say that it’ll be safe and it’ll respond to our needs, let me quote a sentence where you say, we’re going to pursue a future in which billions of people around the world get equitable, hands-on access to experiment with these technologies. First of all, let me ask you, why is that important?

HOFFMAN: Well, it’s important in part because, you know, just think about, for example, how Tim Cook’s iPhone is the same one that the cab driver and the Uber driver are using. And so, when technology is designed for hundreds of millions of people, billions of people, and they’re engaged with it, then we’re all being elevated and our society itself is getting superagency together. And so, that inclusion helps us direct it in ways that are better, not just for individuals, but also for society.

ISAACSON: You talk about a cognitive industrial revolution. And, you know, I love looking at history of all the various agricultural industrial revolutions. Compare this one to the previous revolutions we’ve had driven by technology.

HOFFMAN: Well, part of the reason I call it a cognitive industrial revolution is I want the upside to be well understood, which is that nothing of our society as it exists, the middle class, education, medicine, you know, this very technology that you and I are talking on today, exists without the industrial revolution. So, the targeted outcome can be amazing and should be amazing. But we also have to navigate the transitions, because the transitions, like the transition into the industrial revolution, necessitated a lot of kind of difficult adjustments within human society. You know, we needed to try labor laws. We needed, you know, work weeks. We needed a whole bunch of other things in order to make that work. And I anticipate we’re going to have similar transition issues. And so, we need to navigate those, hopefully having learned, with more humanity and more grace than earlier transitions.

ISAACSON: Well, tell me about some of the transition rules that you think are necessary for A.I.

HOFFMAN: Well, so it is the fact that, look, people say, oh, there’s going to be a whole bunch of job replacement, and there will be some job replacement. Take, for example, customer service, where a human is basically trying to be a robot following a script. A.I. will do that much better. But even a lot of other jobs will be transformed. So, where you previously needed these, like, 20 skills, you now need only, call it, eight of those skills and then five more in order to do the job. Like, if you’re a graphic designer, your ability to do things fine-tuned with your, you know, manual dexterity is very important. Now, visual thinking is still important, but the manual dexterity may be less important. And that’s kind of a job transition. So, we need A.I.s. The thing I would be asking for from the companies, from government, is A.I.s that help people with these transitions: you know, which few skills they need to pick up; whether they are still going to be doing this job, which will be a human with an A.I. replacing a human without one, the same human doing that; or helping you find other jobs, helping you learn other jobs, helping you do other jobs. And we want A.I.s helping us with all of that.

ISAACSON: You know, when you talk about it, the role seems even grander than just helping us with our jobs, because I’m going to read you a sentence from the book. You say, every new technology we’ve invented, from languages, to books, you’re talking about the printing press, to the mobile phone, has defined, redefined, deepened, and expanded what it means to be human. So, tell me how this will expand what it means to be human.

HOFFMAN: Well, so part of it is we are actually more homo techne than homo sapiens. We evolve through our technology; our genetics are basically the same over the last 4,000 years, which is kind of the recorded history of human civilization. And yet, the way that, you know, I can see literally through glasses, et cetera. A.I. is going to be doing the same thing. It’s going to be saying, hey, you want to learn something, you want to know something, you want to navigate the world? We call A.I. an informational GPS. And so, just like you use GPS to navigate, like, a new city that you’re going to, all information spaces, medical, work, education, creative, can be amplified with this information GPS, and that then becomes a new part of what it is to be human, because we have that helping us with our navigation, just as our smartphones and a map do today.

ISAACSON: One of the things you talk about, it’s part of your subtitle, is what could go right. I know you talk about Manas. You’ve been involved with cancer research. Tell me the big things that can go right with this.

HOFFMAN: Well, so, you know, as you were gesturing at, Manas, a company that I co-founded with Siddhartha Mukherjee, you know, we’re trying to cure cancer with A.I. acceleration of drug discovery. But even more immediate, like line of sight, not needing to build new things, is a medical assistant, 24/7, that can be in every pocket. You have a concern about yourself, your child, your sibling, you know, your parent, you can get an answer. A tutor on every subject for every age. A legal assistant. You’re like, well, I can’t afford a lawyer, and I’m trying to figure out this rental contract. Well, actually, in fact, A.I.s, ChatGPT, Copilot, et cetera, can help you with that today. And so, those are all ways that your life gets kind of magnified. And of course, it’ll end up being even more things than these things that we can see today. Part of new general-purpose technologies is that they create new things that we didn’t even imagine were possible. And that’s part, of course, of the excitement of, you know, being an inventor and investor in these things.

ISAACSON: Give me an example of something that we probably aren’t imagining now that you think might happen.

HOFFMAN: Well, OK, this might be a little wonkish for our audience, but, like, for example, one of the reasons why all of these frontier labs are building coding assistants is not just to make engineers more efficient, though yes, that is part of it. But, like, now you’re going to have copilots that can be a software engineer for all of us. So, for example, if I said, hey, I want to create a new kind of, you know, game that I can play with my friends, and I have this idea of something that I could possibly do, I will then go create that game, and I can have my A.I. software engineer help create it for me. And that’s just, like, one example of lots. It could be doing research; it could be, you know, figuring out, like, I want to have a home app to coordinate with my family. For all of those things, we are now all going to have a software agent copilot within a relatively small number of years.

ISAACSON: You talk about superagency, and you say in this book that A.I. will enhance human agency. But so much of what’s happened with social media recently, and with the digital revolution in general, seems to have reduced our agency in some ways. It’s made it harder for us to be in control of our lives. Explain that and what can be done.

HOFFMAN: Part of what we’re trying to advocate for in the book is how technologists should build things, what the things are that we should want, things that increase our agency. Part of it is to say, well, imagine you had an agent that was on your side, that was reading something for you and was essentially enabling you, and you read something and you say, I’m not sure that’s quite right. And then, the agent would say, well, here are three things that you might consider as interesting sources to figure this out, or, hey, you’re believing this thing that you’re reading or seeing, here are some things that cross-check it. Like, for example, one of the things that we’re already seeing with A.I. is, like, cybercriminal phishing. So, you can get a phone call that’s trying to persuade you that it’s your brother, sister, child, partner, et cetera, and it starts saying, hey, send money to here. Well, you also will have an agent that says, oh, hey, this sounds like it could be something that’s not good for you, why don’t you ask something that only the two of you know in order to make sure? And that can be part of the protection. And this is what we’re essentially building toward as we go through these transitions to the other side.

ISAACSON: And how will A.I. then help us with disinformation in general, the type of disinformation that’s gotten very controversial now, but seems to be undermining our democracy?

HOFFMAN: Well, as you know, this is a deeply political topic. I mean, what counts as misinformation, what counts as disinformation, there are a lot of different views on that, and coming to a coherent set of views as, kind of, the bulk of America is actually very difficult. Now, that being said, I think there are a lot of ways that these agents can actually, in fact, kind of be on our side, helping, like, fact-check and do other kinds of things. So, take, for example, you know, you’re reading some information on some media site or social media site that says, you know, vaccines are known to be very dangerous and they, like, plant chips in you and they cause autism, and the agent can say, well, actually, in fact, here’s what a bunch of different scientific studies show, here’s what some of the people who challenged them say, and here’s how you can inform yourself in a good way. And I think that’s the kind of thing that we want these agents to be helping us with. And, you know, part of it is, if you have an agent that’s kind of saying, hey, look out for me when I’m doing this, and I go, hey, I’ve seen this agent look out for me in various ways, like look out for me in terms of my health, look out for me in terms of making, like, financial decisions or reading my rental lease, then it’s like, oh, then I trust it in telling me these things too. And that’s part of my hope, that as we work our way through the transition we actually have more of a common knowledge base, so that we can then be finding more things where, ah, here’s how we can get to agreement and realize what this information actually is.

ISAACSON: Let me ask you about regulation. I just came back from Europe, and in Paris, they had the A.I. summit. At the previous A.I. summits, it had all been about how we are going to regulate this. Now, in Europe, they’re totally panicked that Mistral, one of their A.I. companies, which does Le Chat, which competes with ChatGPT, is going to be hindered because there’ll be too much regulation. If there’s a whole lot of competition and a whole lot of companies and a whole lot of countries, from China to France to the U.S., doing it, how can we regulate it?

HOFFMAN: Well, I think that, first off, we should try to keep the initial regulation very focused. It’s like, how do we make sure that, you know, criminals, terrorists, rogue states aren’t doing things? How do we make sure that we don’t make some critical large error, right, in terms of how this stuff works? Not how do we cover every single thing. Not how do we have it never, you know, do something that might be, like, a hallucination or a misstatement of some kind of fact or other kinds of things; we can adjust those as we iterate and develop. And part of what I think is really good is the diversity of multiple approaches. I actually personally think that having the French, the British, the Germans, the Italians in this and helping shape it helps it be for more of humanity generally. Any one culture only has its own lens. And as much as, of course, you know, Walter, you and I love, you know, kind of a lot of the deep American values and American culture, part of the reason is because we learn from others as well. It’s one of the things that I think is part of, you know, our aspirational American values. And I think that’s a good thing. And so, I think regulation is possible. It should be specifically focused, slowly learning as we go.

ISAACSON: You’ve been a supporter of Democrats. I think you supported Kamala Harris. But so many of the tech bro friends you have from Silicon Valley have now been flocking to Washington, Elon Musk being very involved with the administration, but so many others. What caused this shift in the Silicon Valley mentality? And do you worry when you see an inauguration that has all the tech bros sitting in front of the cabinet?

HOFFMAN: Well, I think there are two things. So, one is, if I said any government, including the American one, is deeply talking to the technology industry about what the future is, I would say that’s good. And that includes our current administration and so on. Now, I think one of the reasons why a lot of technologists said, hey, we’re going to work with the current administration, is not just, of course, because it was, you know, completely fairly and democratically elected, but also, it’s the, hey, you also believe that the technology industry is important to how we create the future jobs, the future industry, the future services that help us and become part of what we can export to the world, and that really matters. And if you have one party arguing that that’s good and one party arguing that that’s bad, the industry, you know, the tech industry, will lean in some percentage to the party that argues that that’s good.

ISAACSON: You were also involved in the lawsuit against Trump. I think you helped fund E. Jean Carroll. And he’s become very vindictive, even against the law firms that have been involved. Do you worry about that revenge and vindictiveness, the bullying that we sometimes see coming out of Washington?

HOFFMAN: Well, of course I do. I think it’s only human and natural to do so. I mean, like when he removed Mark Milley’s protection detail, you know, a general who has put his whole life in service of the U.S. and engaged in conflict around the world and who was a target of essentially, you know, like the opponents of America, like Iran, you know, if he’s willing to do that, what else is he willing to do? That’s, of course, very deeply concerning.

ISAACSON: I’m going to read you a sentence that really struck me in your book, which is by maintaining our development lead, we’re infusing A.I. technologies with democratic values and integrating these technologies across society in ways that bolster our economic power, our national security, and our ability to broadly project our global influence. Explain why you think it’s important for the U.S. to maintain the lead while also infusing it with our values.

HOFFMAN: So, technology shapes the world, and part of the reason why the, you know, kind of, Europeans had, you know, global impact for centuries was that they were the first developers and broad adopters of the Industrial Revolution. This is the cognitive Industrial Revolution. The same kind of thing is going to shape which societies and industries have power and what, you know, kind of has an effect on the shape of technology, how we navigate the world, you know, this kind of informational GPS. And so, a system that says, hey, it’s really important that individuals have autonomy, have agency, that we make decisions as a collective group together within kind of democratic processes, that is, I think, one of the great things that America and other countries have brought to the world, and I think it’s important that we continue to elaborate on them. And the way we do that is by being on the forefront of this technology. And so, you know, sometimes when I’m really trying to push this point home within, you know, kind of an American context, I say we want A.I. to be American intelligence.

ISAACSON: Reid Hoffman, thank you so much for joining us.

HOFFMAN: Walter. Always a pleasure.

About This Episode

Former NATO Supreme Allied Commander on what this week’s meetings could mean for the war in Ukraine. Rim Turkmani, director of the LSE Syrian Conflict Research Program, on what’s happening on the ground in Syria. Sexual abuse and harassment lawyer Ann Olivarius breaks down the implications of Andrew Tate’s return to the U.S. LinkedIn co-founder Reid Hoffman on his new book “Superagency.”
