08.16.2024

CEO of Google DeepMind: We Must Approach AI with “Cautious Optimism”

BIANNA GOLODRYGA, ANCHOR: Well, next, to something that has the potential to influence several of the issues we have discussed in the show so far, from climate change to presidential elections, and that is artificial intelligence. Demis Hassabis is the co-founder and CEO of one of the world’s leading A.I. research groups, Google DeepMind. And he tells Walter Isaacson why he takes a cautiously optimistic approach to the much-discussed technology.

(BEGIN VIDEOTAPE)

WALTER ISAACSON, CO-HOST, AMANPOUR AND CO.: Thank you, Bianna. And, Sir Demis Hassabis, welcome to the show.

DEMIS HASSABIS, CO-FOUNDER AND CEO, GOOGLE DEEPMIND: Thanks for having me.

ISAACSON: We now — I think you’re in your London office there. And behind you probably is that wonderful first edition of Alan Turing’s 1950 paper, in which he proposed to address the question, can machines think. Now, we’ve got a lot of large language models, such as Google Gemini, which you helped create, and ChatGPT from OpenAI. How do we get from a chatbot that kind of can pass a Turing test, fool a person into thinking it’s human, to something that’s really serious, like Artificial General Intelligence, AGI, what you call the Holy Grail?

HASSABIS: Yes. Well, look, it’s a great question. And of course, there’s been unbelievably impressive and fast progress in the last decade-plus, as you say, getting towards systems that we have today that can pass a Turing test. But it’s still far from general intelligence. What we’re missing is things like planning and memory and tool use, so they can actively, you know, solve problems for us and actually do tasks. So, right now, what we have is kind of passive systems. We need these active systems.

ISAACSON: Wait, explain to me what planning is. I know you and I do it. How does a machine do it?

HASSABIS: Well, we’ve experimented a lot in the past with planning, actually using games. So, one of our most famous programs, back in 2016, was AlphaGo, which was the program that we built to beat the world champion at Go, the ancient game of Go. And it involves building a model of the board game and what kinds of moves would be good. But on top of that, a model is not enough to play really well. You also need to be able to try out different moves sort of in your mind, and then plan and figure out which one, which path, is the best path. And today’s models, the language models, don’t do that. And really, we need to build that planning capability, the ability to break down a task into its subtasks and then do each one in the right order to achieve some bigger goal. They’re still missing that capability.
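
To make the planning idea concrete, here is a minimal sketch of depth-limited lookahead over a game: generate candidate moves from a model of the game, "try them out in your mind" by recursing, and keep the line that scores best. The GameModel interface assumed here (is_terminal, evaluate, legal_moves, apply) is a hypothetical placeholder, not DeepMind’s AlphaGo code, which pairs learned policy and value networks with Monte Carlo tree search.

```python
def plan(model, state, depth):
    """Depth-limited lookahead: return (score, best_move) from `state`.

    `model` is a hypothetical game model; `evaluate` stands in for a learned
    or heuristic value function scored from the point of view of the player to move.
    """
    if depth == 0 or model.is_terminal(state):
        return model.evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in model.legal_moves(state):
        child = model.apply(state, move)
        # Negamax convention: the opponent's best outcome is our worst,
        # so a child's score is negated from the mover's perspective.
        child_score, _ = plan(model, child, depth - 1)
        score = -child_score
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```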

ISAACSON: Tell me why the use of games is so important to the development of artificial intelligence.

HASSABIS: Yes, games, well, were what got me into artificial intelligence in the first place. It was playing a lot of chess for the England junior teams and then trying to improve my own thought processes that led me to thinking about mechanizing intelligence and artificial intelligence. And so, we used games when we started DeepMind back in 2010 as a testing ground, a proving ground, for our algorithmic ideas and for developing artificial intelligence systems. And one reason that’s so good is because games have clear objectives, you know, to win the game or maximize the points that you can score in a game. So, it’s very easy to sort of map out and track if you’re making progress with your artificial intelligence system. So, it’s a very convenient way actually to develop the algorithmic ideas that, you know, now underpin modern A.I. systems.
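
The "clear objective" point is easy to see in code: because a game reports an explicit score, tracking research progress reduces to measuring average reward over episodes. The loop below is a generic agent-environment sketch in the reinforcement-learning style, assuming hypothetical `env` and `agent` objects rather than any specific DeepMind API.

```python
def average_score(agent, env, episodes=100):
    """Average game score over `episodes` -- a simple, objective progress metric."""
    total = 0.0
    for _ in range(episodes):
        state = env.reset()          # start a new game
        done = False
        while not done:
            action = agent.act(state)               # the agent picks a move
            state, reward, done = env.step(action)  # the game scores it
            total += reward
    return total / episodes
```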

ISAACSON: I think most of us have now used the chatbots like Gemini or ChatGPT. But you’ve talked not only about moving us to artificial general intelligence, in other words, the type of intelligence that can do anything a human can do, but also about what I guess I’d call real-world intelligence, you know, robots or self-driving cars, things that could take in visual information and do things in the physical world. How important is that? And how do you get there?

HASSABIS: Yes, it’s incredibly important. I think this idea is sometimes called embodied intelligence, and, you know, self-driving cars are an example of that and robotics is another example, where these systems can then actually interact with the real world, as you say, the — you know, the world of atoms, so to speak, and not just be stuck in the world of bits. So, there are going to be huge advances, I think, in that space in the next few years. And, you know, that’s also going to involve this planning capability and the ability to sort of do actions and carry out plans in order to achieve certain goals. And that’s not the only area of real-world application, I would say. The one other area that I’m super passionate about, and the reason I have spent my whole career building A.I., is to apply A.I. to science, scientific problems, scientific discovery, including our program AlphaFold that cracked the grand challenge of protein folding.

ISAACSON: Yes, tell me a little bit more about AlphaFold, because what it can do is understand RNA, DNA, all these things that we think determine what a protein looks like. But actually, it’s the folding of the protein. How important and hard was that? And what is it going to do for us?

HASSABIS: Well, the protein folding problem is a 50-year kind of grand challenge in biology, one of the biggest challenges in biology, sort of proposed in the 1970s by a Nobel Prize winner, Anfinsen. And the idea was, can you determine the 3D structure of a protein? You know, everything in life depends on proteins, all your muscles in your body, everything, all the functions of your body are governed by — supported by proteins. And what a protein does depends on its 3D shape, how it folds up in the body. And the conjecture was, could you predict the 3D shape of a protein based just on its one-dimensional genetic sequence, right? So, just a string of numbers, sometimes called the amino acid sequence. Can you predict the 3D structure of the protein just from its amino acid sequence? If you could do that, it would be really important for understanding biology and the processes in the body, but also for designing things like drugs and cures for diseases, and understanding when something goes wrong and how to design a drug to bind to a certain part of the protein. So, it’s a really foundational, fundamental problem in biology. And we managed to pretty much sort of crack that problem with AlphaFold.
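
In input/output terms, the problem Hassabis describes is a mapping from a one-dimensional amino-acid sequence to a three-dimensional structure. The sketch below only pins down that interface, simplified to one coordinate per residue; `fold` is a hypothetical stand-in for a learned model such as AlphaFold, not its actual API.

```python
from typing import List, Tuple

Coord = Tuple[float, float, float]  # x, y, z position of one residue, in angstroms

AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard amino-acid letters

def fold(sequence: str) -> List[Coord]:
    """Map an amino-acid sequence to one 3-D coordinate per residue.

    Placeholder for a learned structure-prediction model; raises until one is plugged in.
    """
    assert set(sequence) <= AMINO_ACIDS, "sequence must use the 20 standard residues"
    raise NotImplementedError("replace with a structure-prediction model")
```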

ISAACSON: There are so many large language models competing. It’s almost like a racetrack in which Google Gemini, yours, is up there against OpenAI, against Grok from xAI, and Meta, I think, has its own, and Anthropic. One of the things that seems to distinguish the latest model of Google Gemini is that it’s multimodal, meaning it can look at images, it can hear words, not just deal with text. Explain that to me, and whether that’s a differentiator.

HASSABIS: Yes, one of the key things we did when we were designing our Gemini system was to make it, as you said, so-called multimodal from the beginning. And what that means is it doesn’t just deal with language and text, but also images and video processing and code and audio. So, all the different modalities we as human beings sort of use and exist in. And we’ve always thought that was critical for the A.I. systems and models to be able to understand. If we want them to understand the world around us and build models of the world and how the world works, and be useful to us as perhaps digital assistants or something like that, they need to really have a good grounding and understanding of how the world works. And in order to do that, they have to be multimodal. They have to process all these different types of information, not just text and language. And so, we built Gemini from the beginning to be natively multimodal. So, it had that ability from the start. And we were envisaging things like, you know, a digital assistant, a universal assistant that can understand the world around you and therefore be much more helpful. But also, if you think about things like robotics or anything operating in the real world, it also needs to interact with and deal with real-world problems, things like spatial relations and the context that you’re in. So, we think it’s kind of fundamental for general intelligence.
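
A schematic of what "natively multimodal" means at the interface level: one prompt can interleave text, image, and audio parts, and the model consumes them together. The part types and the `generate` call below are hypothetical, sketching the shape of such an interface rather than Gemini’s actual SDK.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class TextPart:
    text: str

@dataclass
class ImagePart:
    png_bytes: bytes

@dataclass
class AudioPart:
    wav_bytes: bytes

Prompt = List[Union[TextPart, ImagePart, AudioPart]]  # interleaved modalities

def describe_scene(model, photo: bytes, clip: bytes) -> str:
    """Ask a (hypothetical) multimodal model about an image and an audio clip together."""
    prompt: Prompt = [
        TextPart("What is happening here, and what sound is in the clip?"),
        ImagePart(photo),
        AudioPart(clip),
    ]
    return model.generate(prompt)  # hypothetical single call over mixed modalities
```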

ISAACSON: The big news in the past week or two was Meta, the Facebook parent, coming out with Llama. It’s a form of competitor in some ways to Google Gemini and OpenAI’s system. And Mark Zuckerberg, when he introduced it, made a big deal about it being open source. You’ve been on — you know that debate better than anybody. Tell me why the full-fledged Google Gemini is not open source and whether Mark Zuckerberg is right to say this is important.

HASSABIS: It’s definitely very important. We’re huge — Google DeepMind and Google, in general, are huge supporters of open source software. We’ve put out — I mean, we were just discussing AlphaFold earlier. That is open source, and, you know, over 2 million biologists and scientists around the world make use of it today, in pretty much every country in the world, to do their important research work. We’ve published, you know, thousands of papers now on all the underlying technologies and architectures required for building modern A.I. systems, including, most famously, the transformers paper, the architecture that underlies pretty much all the modern language models and foundation models. So, we very much believe that the fastest way to make scientific progress is to share information. That’s always been the case. That’s why science works. Now, in this particular case with AGI systems, I think we need to think about what happens as they get more powerful. So, not today’s models. I think that’s fine. But, you know, as we get closer to artificial general intelligence, what about the issues around bad actors, whether that’s individuals or up to nation states, using these things, repurposing these same models? They’re dual purpose. They can be used for good. Obviously, that’s why I’ve worked on A.I. my whole career, you know, to help cure diseases and maybe help with things like climate change and so on, and advance science and medicine. But they can also be used for harm if incorrectly used by bad actors. So, that’s the question, I think, that we’re going to have to sort of resolve as a community, and a research community: how do we enable all the amazing good use cases of A.I. and share information amongst well-meaning actors, you know, researchers and so on, to advance the field and come up with amazing new applications that benefit humanity, but at the same time restrict access for would-be bad actors to do harmful things with those same systems by repurposing them in a different way? And I think that’s the conundrum we’re going to have to sort of solve somehow with this debate about open systems versus closed systems. And I don’t think there’s a clear answer yet, or consensus, about how to do that as these systems improve. But of course, you know, I congratulate Mark Zuckerberg and Meta on, you know, their great new model. And I think it is useful to stimulate the debate on this topic.

ISAACSON: One of the things that can make an A.I. system really great is the training data that it can use. And you’re at Google, you own YouTube; this show will be on YouTube, our segment, pretty soon. Google Gemini trains on YouTube, unless somebody stops it. It also can train on my books. It could read any book I wrote. What is to — how do we regulate this so that Google Gemini can’t just take all this data and intellectual property without some deals?

HASSABIS: Yes, we’re very — well, we’re very careful at Google to respect all of those kinds of copyright issues and to only train on the open web, whether that’s YouTube or the web in general. And then, obviously, we have content deals as well. And so, you know, this is going to be an interesting question for the whole industry, and the whole research industry, how to tackle this going forwards. We also have Google opt-outs to allow any website to opt out of that training if they want to do that. And many people take advantage of that. And then, in the fullness of time, I think we need to develop some new technologies where we can do sort of attribution, some way of saying this training input helped, in some fractional way, with some output, so that some commercial value derived from that can flow back to the content creators. I think that technology is not there yet, but I think we need to develop it. You know, analogous to that would be Content ID for YouTube, which YouTube has had for many years and which runs very well, in order for the creator community to benefit massively from the distribution that YouTube gives. And I think that’s a good example that we’re trying to follow in the A.I. space, the way the YouTube ecosystem has developed.
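
One published mechanism behind the opt-out Hassabis mentions is a robots.txt user-agent token, "Google-Extended", that site owners can disallow to keep their content out of training for Google’s AI models. The helper below simply appends such a rule to a robots.txt file; the token name follows Google’s public documentation at the time of this interview, while the function itself is illustrative.

```python
OPT_OUT_RULE = (
    "User-agent: Google-Extended\n"  # token Google documents for AI-training opt-out
    "Disallow: /\n"
)

def append_training_opt_out(robots_path: str = "robots.txt") -> None:
    """Append the AI-training opt-out rule to a site's robots.txt (illustrative helper)."""
    with open(robots_path, "a", encoding="utf-8") as fh:
        fh.write("\n" + OPT_OUT_RULE)
```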

ISAACSON: In the fascinating biography of your life, there’s something almost as important as being a game player and a game designer, and that’s that you have a PhD in cognitive neuroscience. You love the human brain. How important is it to understand how the human brain works in order to do A.I.? And is there something that’s always going to be fundamentally different between a silicon-based digital system and the wetware of the human brain?

HASSABIS: Yes, you’re right. So, I did my PhD, you know, nearly 20 years ago now, in the mid-2000s. And I think back in those times, and in the early parts of DeepMind, the early 2010s, it was very important to have inspiration both from machine learning and mathematics and from neuroscience and the human brain as to how intelligence might work. So, it’s not that you want to slavishly copy how the brain works, because, as you pointed out, our brains are carbon based and our computers are silicon based. So, there’s no reason why the mechanics should work in the same way. And in fact, they work quite differently. But a lot of the algorithmic principles and the systems and the architectures and the principles behind intelligence are in common, including, you know, in the early days of neural networks, the things that underpin all modern A.I., which were originally inspired by neuroscience and synapses in the brain. And so, the implementation details are different, but the algorithmic ideas were extremely valuable in terms of kickstarting what we see as the modern A.I. revolution today, including this idea of learning systems, reinforcement learning, and systems that learn for themselves, very much like biological systems and our own brains do. And then ultimately, you know, maybe when we build AGI, we’ll be able to use that to go back and analyze our own minds, so that we can understand the neuroscience better and finally understand, you know, the workings of our own brain. So, I love this kind of virtuous circle of them influencing each other.

ISAACSON: Here’s something you’ve said, quote, mitigating the risk of extinction from A.I. should be a global priority. What are those risks?

HASSABIS: Look, I think that was a sort of open letter that I and many others signed, and I think it was important to put that in the Overton window of things that need to be discussed. You know, I think nobody knows the timescales of that yet, or how much to worry about it. I think the current systems are impressive, though they are still quite far from artificial general intelligence. And also, we don’t know what the risk levels are of that. Look, maybe it’ll turn out to be very simple to navigate, you know, the controllability of these systems. How do we interpret them? How do we make sure, when we set them goals, you know, these more agent-based systems, that they don’t go and do something else on the side that we didn’t intend, unintended consequences? You know, there’s lots of science fiction books written about that. Most of Asimov’s books are about those kinds of scenarios. So, we want to avoid all those things so that we make sure we use these systems for good and for amazing things, you know, solving diseases, helping with climate, inventing new materials, all these incredible things that I think are going to come about in the next decade or so. But we need to understand these systems better. And I think over that time, we’ll also understand the risks involved, whether of runaway systems producing unintended consequences or of bad actors using these systems in nefarious ways. You know, that may all end up being very low probability, and let’s hope that’s the case. But right now, there’s a lot of uncertainty over it. So, as a scientist, you know, the way I deal with that, I think the only sort of responsible approach, is to approach it with cautious optimism. So, I’m very optimistic, obviously, that we’ll — you know, that human ingenuity collectively will work this all out. You know, I’m very confident of that. Otherwise, I wouldn’t have started this whole journey 30 years ago for myself. But, you know, it’s not a given, right? So, there are some unknowns that we need to do research on and focus on to understand, things like analysis of these systems, so they’re not just black box systems, so that we actually understand them and can control them and look at how knowledge is represented in them. And then, we’ll be able to understand the risks and the probability of those risks, and then mitigate against those. So, really, it was just a call to action to pay more attention to that, as well as to all the exciting commercial potential that everyone’s wrapped up in. We should also think, at the same time, about the risks. But, you know, still be optimistic, while approaching it with the respect that such a transformative technology as A.I. deserves.

ISAACSON: Sir Demis Hassabis, thank you for being with us.

HASSABIS: Thanks for having me.

About This Episode

Former White House National Climate Adviser Gina McCarthy discusses a summer filled with extreme weather and silence on the subject from U.S. presidential candidates. Journalists Caitlin Dickerson and Lynsey Addario talk about their reporting on migrants as they follow them through the lethal Darién Gap route. CEO of Google DeepMind Demis Hassabis on the promise and peril of AI discoveries.
