Author Amy Webb on Her New Book, “The Big Nine”

Hari Sreenivasan sits down with futurist Amy Webb, who warns against the dangers of tech giants and their latest technology in her new book “The Big Nine: How the Tech Titans and their Thinking Machines Could Warp Humanity.”


CHRISTIANE AMANPOUR: Such a generous pair. And we now turn back to the future, where artificial intelligence is re-weaving the fabric of our society. And our next guest says that we need to change patterns. Amy Webb is the founder of The Future Today Institute. Her new book, “The Big Nine: How the Tech Titans and their Thinking Machines Could Warp Humanity,” warns of a world with little choice and no control, where wars are fought by computer code. She sat down with our Hari Sreenivasan.

HARI SREENIVASAN: You say in the book “The Big Nine” that these are nine companies that are pretty much in charge of most of the A.I. that’s interacting with us today. In the United States, you call them the G-MAFIA, an acronym for Google, Microsoft, Apple, Facebook, IBM, Amazon, right? Those companies here — and you say that really the concern you have is that they’re all operating with market pressures and commercial realities. Why is that a problem?

AMY WEBB, FOUNDER, THE FUTURE TODAY INSTITUTE: The challenge with A.I. is that you have a relatively small group of people in a handful of companies in the United States building out these systems. And unfortunately, they don’t have the luxury of taking time to do risk modeling and assessment, to think about guardrails, to sandbox and test things to make sure that they are safe or that they’re not going to evolve in ways that we haven’t yet seen, right?

SREENIVASAN: Their business models have been actually rewarded for the opposite.

WEBB: That’s right.

SREENIVASAN: Right. So go ahead. What was it, run fast, break things?

WEBB: Run fast and break things.

SREENIVASAN: But in the case of A.I., you’re saying this is much more dangerous than just the Internet stuff that we’re talking about now.

WEBB: I mean, I would say so. And they’re publicly held companies, so they have a responsibility to their shareholders. And unfortunately, our shareholders have less and less patience. A couple of weeks ago, as part of its earnings call, Google had to disclose its R&D spend, which was very big, and it meant that some of the margins in other places were going to be smaller, and investors freaked out. Well, it’s not like the United States government has a giant pool of money funding basic science and technology research. We did years ago, but we don’t have that now, which means that in the United States, the entire future of A.I. is being built by essentially six companies who have a responsibility to their shareholders, who don’t have the luxury to say, let us keep our heads down and work on this without the expectation that there’s going to be a product at the end of it. And on top of all of that, there’s been no strategic direction under our current administration on A.I. We have no national strategy. We have no point of view. And A.I. is not a tech trend. A.I. is the next era of computing into which everything else is tied. So we have a situation in which there’s an antagonistic relationship between our Congresspeople and regulators and what’s happening in the Valley, with Wall Street, for the most part, calling the shots. It’s a dangerous situation. And during this time, in China, the three predominant A.I. companies, the BAT — Baidu, Alibaba, and Tencent — are pretty much operating as public companies in lockstep with Beijing. Then you have to connect all of the other dots. Xi Jinping, because of some changed regulations in China, is effectively now president for life. And he is a smart guy, a smart guy who is good at aligning other leaders around him.

SREENIVASAN: The companies that are working in China right now, are they part of a larger geopolitical strategy that China has?

WEBB: That’s a good question. Nothing on paper would affirm that. However, from my vantage point, that is exactly what’s happening. So Baidu, Alibaba, and Tencent each focus on different aspects of A.I. Alibaba is similar to Amazon. Baidu is similar to Google. And what does all of this have to do with people — why should anybody care outside of China? There’s the Belt and Road Initiative. This is a diplomatic effort. China’s trading debt for infrastructure, so it’s going in and building roads and building bridges. So that’s interesting, and there are about 60 countries now that are part of this initiative. But it’s also a deeply funded digital initiative. So in addition to bridges and roads, China’s also laying fiber. It’s putting small cells all over the place to build 5G networks. And it’s also exporting some of its A.I. and some of its data collection techniques. But it’s China, which means that you are very much playing by the rules of the government.

SREENIVASAN: Speaking of playing by the rules — there’s really almost a large-scale experiment going on with the Social Credit System. What is it?

WEBB: Sure. So the Social Credit Score System is a way of collecting data to track how people are performing in society. As an easy example, there are a couple of provinces where this technology has been tested. If you jaywalk across the street when the light is red — there are smart cameras lining the streets everywhere — you are automatically recognized. Your face is thrown up on a digital billboard so that everybody else can see that you’ve just broken the law.

SREENIVASAN: You’re the jaywalker?

WEBB: That’s right.

SREENIVASAN: Public shaming.

WEBB: Public shaming, so that’s a piece of it. You’re asked to report to a local police precinct. Usually, there could be a fine levied. And social networks are used to tell your employer and your family members that you have gone against the grain of society and broken the law.

SREENIVASAN: This is happening now.

WEBB: That’s happening now.

SREENIVASAN: This is like a FICO score that we would understand when we get a mortgage. But this is much more related to every other part of your life.

WEBB: That’s right. So if you have a bad credit score, you’re going to have a hard time getting a car, you’re going to have a hard time getting a mortgage. People may pass judgment on you, right? You may have a hard time getting certain kinds of jobs. This is like that for every aspect of your life. So if you have a low score, your kids aren’t getting into the right schools. Now, again, you may say to yourself, that’s fascinating, but I don’t live in China, so why do I care about this? This technology and this concept are already being exported. If you’re an authoritarian ruler —

SREENIVASAN: And you want to keep people in line.

WEBB: And you want to keep people in line, this is a pretty nifty way of making that happen.

SREENIVASAN: So you create the incentives and disincentives to figure out how you want to steer the population.

WEBB: That’s right. And if it’s the case that eventually all of these people in all of these countries have some kind of social credit score, isn’t it also plausible that the companies where they work will be granted some type of corporate credit score for the purpose of determining trade values and taxes and tariffs and things like that? So if that’s what’s happening, and we are not a part of that process as China creates a new world order using artificial intelligence — if as Americans we don’t have a Social Credit Score, and the companies that we own or work for don’t have a corporate credit score — it may be impossible for us to do business in any of those countries.

SREENIVASAN: Unless we opt into this.

WEBB: Unless we opt in.

SREENIVASAN: You seem less concerned about the Terminator scenario — robots coming around and deciding that we’re inefficient — than you are about essentially an erosion of humanity. Because as you start talking about A.I. making choices, what are the values that those choices are based on when they’re machines?

WEBB: So I’m a pragmatist. This does not mean that I’m not concerned a little bit about a weaponized version of A.I. But the pragmatist in me is much more concerned about what’s happening now. And what’s happening now is that we are surrounded by systems that make millions of decisions on our behalf every day, all day long. And the point of this is to optimize our lives. The challenge with A.I. optimizing our lives is that the people who built these systems made determinations about what optimal is. And the people working on these systems, quite frankly, don’t really look like or represent everybody. And if you don’t share the same world view as a critical mass of people, if you don’t have the same experiences — if, if, if, if — how can these systems that are designed to make optimal decisions for us all possibly reflect our own individualistic values? They can’t. Think of all of the different things in our lives that we are losing the ability to have any modicum of control over. All of us now are being nudged. Every time we send an e-mail or a text message, you see at the bottom of your phone —

SREENIVASAN: Suggested response.

WEBB: Suggested responses. My husband is a perfect example. A couple of days ago, I texted my husband something, and he texted me something back, and it was a set phrase that — we’ve been married for 10 years — I’ve literally never once heard him say. And it irritated me because I felt like he wasn’t communicating with me in a personal way.

SREENIVASAN: You weren’t even worth an actual response.

WEBB: That’s right. And —

SREENIVASAN: It was just a button because it was so easy.

WEBB: That’s totally right. So when he came home, we had dinner that night, and I was like, hey, remember that text message from earlier —

SREENIVASAN: I wrote you back.

WEBB: Exactly. And I said, did you type the response or did you just click the button? And he was like, well, I clicked the button, it meant the same thing. But the problem is, it literally means the same thing, but it doesn’t really mean the same thing. And I think we don’t recognize just how much of our lives are already being optimized by a handful of people working in Silicon Valley. This may seem insignificant now, but over time, I think these things have the potential to change the fabric of society and how we relate to each other.

SREENIVASAN: You say in the book you have three scenarios: an optimistic, best-case one; a pragmatic one; and a kind of not-so-great one. Considering where we’re at today, which is the more likely one to happen?

WEBB: What I think is likely is somewhere between the pragmatic and the catastrophic scenarios, in which we slowly lose control over the ability to make decisions. We see more and more consolidation. We have less choice, ironically. We find that we are being nudged continuously. And being nudged by our technology continuously not only makes us miserable, but we also start to forget that we have some agency in how we do things, and that encourages a mental laziness that spills over into other areas of life. And slowly but surely, China builds an arsenal of code as it deploys this new kind of diplomacy. And we find that the world is divided in new and quite uncomfortable ways. And we’re looking at future wars fought in code rather than combat. And we’ll know when the first shots have been fired — when our lights start flickering intermittently, when we get locked out of our smart microwaves, and we find that our lives are difficult because somebody has decided to restrict them. I think what people have in their heads is some kind of event horizon where the machines wake up and then they come to kill us all. That would certainly be horrible. But so would living in a country like America, which is built on individual freedoms, and finding that we are bound by restrictions in all of these different ways, with systems making decisions about us and for us in ways that are completely unintelligible. That, to me, would be in a way worse. A single event horizon, a single bomb that drops and destroys — obviously, that’s bad. But isn’t it almost worse to live through generations of transition away from the freedom where we are right now to absolute control? To me, that’s worse. My point with all of this is, I know the catastrophic scenario is bleak, and I know that it has scared a lot of people who read early versions of it. But my purpose for doing this is to change our developmental path. So there is —

SREENIVASAN: And we have agency now that we could do something about it to get off that track.

WEBB: That’s right. So if we stop thinking about artificial intelligence as a great way to make money fast, or as some kind of amorphous, sci-fi way to ensure human longevity far into the future — if we can get down to brass tacks and start making smarter decisions in a collaborative way, I think we have a real opportunity to make good on some of those promises. But the way to do that is to treat artificial intelligence as a public good. It’s something we all have a stake in, like the air — something we all reap the rewards and benefits from, but that we also protect.

SREENIVASAN: Amy Webb, thanks for joining us.

WEBB: Thank you.

About This Episode

Christiane Amanpour speaks with Rear Admiral David Titley about why climate change is a security risk; and actors Julianne Moore and John Turturro discuss their film, “Gloria Bell,” about divorcees looking for love.
Hari Sreenivasan speaks with author and futurist Amy Webb about her new book, “The Big Nine.”