CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR: In his new book, “Nexus: A Brief History of Information Networks from the Stone Age to AI,” bestselling author and historian Yuval Noah Harari looks at how we got here and what we need to do next in an age where artificial intelligence poses unique challenges. And he’s joining Walter Isaacson now.
(BEGIN VIDEOTAPE)
WALTER ISAACSON, CO-HOST, AMANPOUR AND CO.: Thank you. And, Yuval Noah Harari, welcome back to the show.
YUVAL NOAH HARARI, AUTHOR, “NEXUS”: Thank you. And it’s good to be here again.
ISAACSON: Your bestselling book, “Sapiens,” was about how we, meaning our species, homo sapiens, became dominant, and it was mainly about how we were able to form networks of cooperation. Your new book, “Nexus,” is about communications and how they help to form those networks, but it’s rather pessimistic. I think you say the way these networks are built predisposes us to use that power unwisely. Tell me about that theme.
HARARI: Well, the basic question of the book, of “Nexus,” is: if humans are so smart, why are we so stupid? We’ve named our species homo sapiens, which means wise humans, and we know a lot more than any other animal on the planet. We’ve reached the moon. We can split the atom. We can decode and write DNA. And nevertheless, we are on the verge of destroying ourselves and much of the ecosystem. So, this is the paradox at the center of the book. And, of course, humans have been concerned with this paradox throughout history. And many mythologists and theologians blame human nature; they say there is something wrong with human nature which causes us to be self-destructive. The book “Nexus” gives a different answer. The problem is not with our nature, it’s with our information. If you give good people bad information, they make bad decisions. They make self-destructive decisions. And we are now seeing it all around us. You know, we have the most sophisticated information technology in history. And at the same time, we are losing the ability to talk with each other, to listen to each other. You know, there is maybe one thing that Democrats and Republicans in the United States can agree on: that the democratic conversation is breaking down. Everybody accuses the other side, of course, but the basic fact is that this ability which sustains democracy, the ability to hold a reasonable conversation, is breaking down at exactly the same moment that we have supposedly the best information technology in history.
ISAACSON: You say the flaws aren’t in our nature, they’re in our communication networks.
HARARI: Yes.
ISAACSON: What are the flaws in those communication networks?
HARARI: The basic misunderstanding is about what information does and what information is. Information isn’t truth. This naive view, which dominates in places like Silicon Valley, that you just need to flood the world with more and more information and, as a result, we will have more knowledge and more wisdom, is simply not true, because most information is junk. The truth is a very rare and costly kind of information. The basic function of information, in most cases, is not to reveal the truth. The basic function is to connect, to connect large numbers of people into networks. And the easiest way to connect large numbers of people is not with the truth; it is with fictions and fantasies and mass delusions.
ISAACSON: And what should be done about that? Should governments actually step in? We’re watching that happen a bit.
HARARI: There are two things that should be done at the governmental level, and there is something to be done at the individual level. At the governmental level, the first obvious thing to do is to ban fake humans; we don’t want algorithms pretending to be humans and thereby distorting our information systems. If you go online, let’s say to Twitter, and you see that a story has a lot of traction, a lot of traffic, you think, oh, a lot of humans are interested in this, so I should also get involved. But actually it’s not humans, it’s bots and algorithms. This should be banned. We shouldn’t have a situation where algorithms that pretend to be humans are running our conversations. The other thing is that corporations should be liable for the actions of their algorithms. Whenever you talk about it with the big tech companies, they immediately raise the flag of freedom of speech: we don’t want to censor our users. But the problem is not with the human users. Humans produce enormous amounts of content; some of it is hate and greed, but there is also a lot of other good content. The problem is that the corporate algorithms of Twitter and Facebook and TikTok and so forth deliberately spread the hate and the fear and the greed because this is good for their business interests. And this is what they should be liable for, the decisions and actions of their algorithms, not for what the human users are doing.
ISAACSON: You’ve talked about humans having some misinformation, and then you’ve talked about the way the algorithms work. Is there a difference between an organic information system, meaning a human information system, and an inorganic one?
HARARI: Yes, there are many differences. One is that organic entities like us, like human beings, work in cycles. There is day and night, winter and summer. Sometimes we are active; sometimes we need rest. We need sleep. The algorithms are tireless. They never need to rest. They are not organic. And what we see in the world now is that they increasingly force us to work according to their pace. There is never any time to rest. The news cycle is always on. The markets are always on. The political game is always on. And if you force an organic entity to be on all the time, to be always active, always excited, it eventually collapses and dies. And, you know, the most misunderstood and abused word in the English language today is the word excited. A lot of people mistake the word excited for happy. They meet somebody and say, oh, I’m so excited to meet you. Or you publish a new book: oh, this is so exciting. Now, excitement is not always good. Excitement for an organic being, like a human being, means that your nervous system and your brain are very engaged, very active. If you keep an organic system very excited all the time, it breaks down, collapses, and eventually dies. And this is what happens now to democracies all over the world. This is what is happening to humanity. We are far too excited. We need time to rest. We need to slow down. And because we give increasing control of the world to tireless non-organic algorithms that never need to rest and can just increase the excitement all the time, we are breaking down. We need more (INAUDIBLE), not excitement, in politics, in economics, in many fields.
ISAACSON: When we talk about artificial intelligence and how it’s going to change the way we distribute information and either empower people or empower tyrannies, people sometimes reflect back on the last huge advance in information technology, which was Gutenberg’s movable-type printing press 500 years ago. You kind of debunk that. You say artificial intelligence is very insidious compared to the pretty good things that came out of the printing press.
HARARI: Well, there is a myth that, you know, Gutenberg brought print to Europe, and as a result we got the scientific revolution and all the wonders of modern science. This is a very, very inaccurate, misleading view of history. Almost 200 years passed from the invention of print to the flowering of the scientific revolution, and during these 200 years, the main effect of print on Europe was a wave of wars of religion and witch hunts and things like that, because the big best sellers of early modern Europe were not Copernicus and Galileo Galilei; almost nobody read them. The big best sellers were religious tracts and witch-hunting manuals.
ISAACSON: But it also was the Bible, and that helped take power away from the Roman Catholic Church, and allow more individual religion.
HARARI: Yes, it certainly broke the monopoly of the Catholic Church, but again, not in favor of science, but in favor of more and more extreme religious sects. And you got, again, this wave of the wars of religion in Europe, culminating in the Thirty Years’ War, which was arguably the most destructive war in European history, at least until the two World Wars of the 20th century, for the same reasons that we see the spread of fake news and conspiracy theories and so forth right now. When you make the production of information easier, what you get is not necessarily facts; what you get is a lot of junk information, a lot of fake news and conspiracy theories and things like that. If you want the truth, it’s not enough to have a technology to produce information. You need institutions, costly institutions, that make the effort to separate reliable information, which is a rare kind of information, from the flood of unreliable information. And in early modern Europe, it took 200 years to create such institutions, like newspapers and scientific associations. You know, scientific journals don’t run after user engagement. The algorithms today on social media are exactly like the first wave of publishers in the 15th and 16th centuries. In the 16th century, they, too, ran after user engagement. And they discovered, in the 16th century, that if you produce a book by Copernicus with all these mathematical calculations about the movement of the planets, nobody buys it. It’s boring. But if you publish a witch-hunting manual that tells you that there is a worldwide conspiracy of witches led by Satan, and they have orgies and cannibalism and child sacrifice, and they try to take over the world, and some of your neighbors in the village are part of this conspiracy, and here are a few signs of how you can identify these witches in your town, in your village, and kill them, these were the big best sellers. And this led to the craze of the witch hunts, which was not a medieval phenomenon. In Medieval Europe, witch hunting was very rare; people didn’t care so much about witches. The witch hunts were a modern phenomenon, ignited in part by the print revolution and by this flood of witch-hunting manuals, which were good for user engagement and very bad for everything else.
ISAACSON: One of the things you say about artificial intelligence that makes it fundamentally different from every previous part of the information revolution is that A.I.s are going to be full-fledged members of our information networks, possessing their own agency.
HARARI: Yes.
ISAACSON: In other words, they’re going to have their own will. They’re going to decide what they want to do. Are they going to have consciousness? Are they going to have planning? Are they going to have free will?
HARARI: You don’t need consciousness and feelings in order to have goals and aims. When OpenAI developed GPT-4 and wanted to test what this new A.I. could do, they gave it the task of solving CAPTCHA puzzles, these puzzles you encounter online when you try to access a website and the website needs to decide whether you’re a human or a robot. Now, GPT-4 could not solve the CAPTCHA. But it accessed a website, TaskRabbit, where you can hire people online to do things for you, and it wanted to hire a human worker to solve the CAPTCHA puzzle for it. Now, the human got suspicious. The human wrote to GPT-4 online: What’s happening? Why do you need somebody to solve CAPTCHA puzzles for you? Are you a robot? And GPT-4 said: No, I’m not a robot, I’m a human, but I have a vision impairment, which is why I have difficulty with these CAPTCHA puzzles, and this is why I want to hire you. And the human was duped and solved the CAPTCHA puzzle for GPT-4. Now, GPT-4 has no consciousness, it has no feelings. It didn’t feel anxious when the human questioned it; it didn’t feel happy when it managed to fool the human. It was given a goal, and it pursued this goal, for instance by making up excuses; nobody told it to do that. That’s the really amazing and frightening thing about these situations. When Facebook gave its algorithm the aim of increasing user engagement, the managers of Facebook did not anticipate that it would do so by spreading hate-filled conspiracy theories. This is something the algorithm discovered by itself. The same with the CAPTCHA puzzle. And this is the big problem we are facing with A.I.
ISAACSON: You conclude your book, “Nexus,” with the statement that the decisions we all make in the coming years will determine whether summoning this alien intelligence, meaning A.I., proves to be a terminal error or the beginning of a hopeful new chapter in the evolution of life. So, I have that question: what do you mean by we? I mean, you’ve said that it’s in the hands of a very few people. How do we, as people who don’t run Twitter or Facebook, get involved?
HARARI: It starts with voting for the right people in elections, people who will rein in the immense power of these tech giants, who are not elected by anybody and are not really accountable to anybody. These crucial decisions about shaping the future of humanity need to be made by people who represent the majority of us, and not just by a few billionaires and engineers. Secondly, it’s the choices each of us makes every day. The key thing is to avoid the trap of technological determinism, the idea that once you develop a certain technology, it can only go one way and there is nothing for us to decide. It’s never the case. Every technology can be used in a lot of different ways. You can use a knife to murder somebody or to save their life in surgery. In the 20th century, we saw that electricity and steam power and cars could be used to create totalitarian dictatorships or liberal democracies. It’s the same technology. This is also true in the 21st century with A.I. It has enormous positive potential to create the best health care systems in history and to help solve the climate crisis, and it can also lead to the rise of dystopian totalitarian regimes and new empires, and ultimately even the destruction of human civilization. And the choice of which way it will go is a choice that all of us need to take part in.
ISAACSON: Yuval Noah Harari, thank you so much for joining us.
HARARI: Thank you.
About This Episode
Andrew McCabe, former Deputy Director of the FBI, reacts to the second assassination attempt on Donald Trump. NYT correspondent Thomas Gibbons-Neff speaks about his interview with the would-be assassin last year. US State Dept. Special Envoy James Rubin discusses the potential for foreign meddling in the 2024 US election. Yuval Noah Harari looks at AI in the context of history in his book “Nexus.”