01.30.2025

What Does China’s DeepSeek AI Mean For U.S. National Security?

CHRISTIANE AMANPOUR, CHIEF INTERNATIONAL ANCHOR:  The battle for supremacy in the field of A.I. is well and truly on. The recent bombshell about China’s breakthrough technology DeepSeek had an immediate and negative effect on America’s tech world. Cheaper and faster to run than U.S. technology, it prompted Trump to call it a wake-up call for American industry. But how far ahead is China in this A.I. race? Former top cybersecurity official Anne Neuberger joins Walter Isaacson to discuss the competition and its impact.

(BEGIN VIDEOTAPE)

WALTER ISAACSON, CO-HOST, AMANPOUR AND CO.: Thank you, Christiane. And, Anne Neuberger, welcome to the show.

ANNE NEUBERGER, FORMER DEPUTY NATIONAL SECURITY ADVISER, CYBER AND EMERGING TECH: Thank you, Walter. It’s great to be here with you.

ISAACSON: The big news this week is the Chinese company’s release of something called DeepSeek, an A.I. system that is like Grok A.I. or ChatGPT but much, much more efficient. It uses less electricity and was able to get around the restrictions we put on microchips and microprocessors. I just downloaded it on my phone, and one of the things it said was: the personal information we collect from you may be stored on a server located outside of the country where you live. What does that mean for Chinese espionage and intelligence gathering?

NEUBERGER: You know, DeepSeek did highlight that it’s not enough to be better in A.I.; the cost and efficiency of running those models also matter. A.I. will be a game changer for intelligence because it allows for faster and better intelligence. I’ll give you three practical examples: one in the area of missile defense, a second in the area of counterterrorism, and a third that goes exactly to the question you raised, regarding gathering personal information and making sense of it for an intelligence agency. So, in the area of missile defense, you know, missile launchers are often mobile, which makes it hard to rapidly detect the potential for a missile launch in time to warn missile defense systems. A.I. could be a game changer by bringing together images, for example those captured by satellites of known or suspected missile launch sites, along with signals from people who work in those programs, to rapidly detect the potential for a launch and tip off the defender. Similarly, in the area of language, A.I. trained on multiple languages could know the nuance between dialects and potentially determine which are coded or cultural words, to help identify plots. And then, finally, on the question you asked: when you think about how a government like China collects vast data sets, whether off models running on your phone or off the millions of cameras deployed in China and around the world, China can make sense of that in order to drive surveillance, if they’re interested in a particular individual because of their role or the knowledge they hold, or because key data sets, like research and development in the drug industry, are useful for training models for their own advancement.

ISAACSON: Marc Andreessen, the venture capitalist who’s backed a whole lot of these companies and was actually one of the founders of Netscape, said this is a Sputnik moment for American A.I., referring to the Soviet satellite that went up in 1957. What do you think about that? Is this a wakeup call for all of us?

NEUBERGER: It is indeed. It’s a wakeup call for America that A.I. is a race we can lose. We have the innovation engine today. We have the edge. But it’s not enough to produce the best A.I. technology. The key is also to produce the most efficient, the cheapest to deploy in fields from drug discovery to quantum error correction.

ISAACSON: Biden had an executive order on A.I. and then guardrails around A.I., more than 100 pages explaining what could and could not be done and what intelligence could be gathered. Did that slow the U.S. down and give China the advantage to leap ahead this way?

NEUBERGER: You know, that’s a point with a great deal of nuance. What it likely did was make it harder for the Chinese and force them to innovate in the way they did with DeepSeek. Indeed, the CEO of DeepSeek noted that the Biden administration’s compute controls actually were a challenge for them. But as a result, they innovated. They integrated across the hardware, the algorithms, and the architecture, as well as figuring out ways to run these models more efficiently, which is key.

ISAACSON: Well, wait, doesn’t that mean it was a bad idea for us to do that? Because now they’ve innovated and become much more efficient.

NEUBERGER: Compute does make a difference. It’s just not the be-all and end-all. So, certainly, the controls did slow down the Chinese, but as a result, they had to innovate, had to come up with new ways. And frankly, this is a call to action for our A.I. companies to do the same kind of innovation. So, in addition to compute, high-value data matters a great deal, as does letting a thousand flowers bloom in terms of different ways to innovate on A.I. When I talk about high-value data, you know, there are some, for example, who believe that DeepSeek may have generated high-value data to train their model using U.S. A.I. models, and that matters a great deal. So, as we think about the future, thinking about protecting high-value data sets and building that into U.S. approaches on A.I. will be important.

ISAACSON: The Chinese have done it with an open-source model. Did that give them an advantage?

NEUBERGER: It certainly gives an advantage because with open-source, the model weights are open, even if the data sets they trained on are not, and by having cheaper models available, more of an ecosystem can be built on them. Think about Meta’s Llama, also an open-source model. Why that matters is that app developers then use that model, whether to develop customized ways for kids to learn or to train models on particular languages or sensitive data sets. So, open-source, because it’s cheaper, does allow more users to build on it. That’s a good thing; there are a lot of positive applications. It’s also a concern, because these powerful A.I. capabilities could be in the hands of potentially malicious users. You know, Walter, one of the toughest issues I dealt with when I was at the White House was cyber criminals developing capabilities to lock up American hospitals for ransom. Certainly, A.I. can make it easier for a range of users to build longer-lasting and more focused offensive cyber capabilities.

ISAACSON: When we talk about open-source, we mean, of course, that the source code is public. People can change it and build upon it, as opposed to, let’s say, OpenAI, which was supposed to be open-source but is now closed with its ChatGPT. Is there something the U.S. should try to do in regulating open and closed, or is that something that can’t be regulated?

NEUBERGER: You know, Walter, the way we win is by having the best tech and, as we’ve seen from the lessons of DeepSeek, the cheapest tech to run. And, you know, regulating that becomes whack-a-mole: you regulate this and something else pops up, as you know. So, generally, our approach should be to learn from this, to figure out how we best innovate, and to see that it’s not enough to build potentially the best A.I. tech; it really does matter what it costs for users to use it and the compute power needed to actually run those models in their applications.

ISAACSON: One of the places where it really crops up, and we can all feel it, is facial recognition. If I walked around here in New Orleans, there are all sorts of cameras, doorbell cameras and cameras on the streets, and it wouldn’t usually matter. But in China now, they can recognize individuals and use that facial recognition as part of their espionage on their own citizens. Is that something that could be and should be done in the United States?

NEUBERGER: So, first, your point’s well taken that China uses the millions of cameras deployed in China and, frankly, around the world to train very sophisticated facial recognition models. And that’s a real concern. As you asked me earlier, regarding the race between the intelligence services of authoritarian and democratic countries, China’s ability to do that will make U.S. intelligence operations harder. It puts U.S. military officers and others at greater risk. In the United States, the challenge for us is deploying artificial intelligence in a way that’s in line with the values and laws of a democracy. You know, Walter, I saw this firsthand at the National Security Agency, where I served as NSA’s first chief risk officer following the Snowden revelations. I saw the loss of trust in our democracy’s intelligence services among our citizens and our allies, and the divisiveness between government and the tech sector that developed after that. That must be avoided in terms of how the U.S. government uses A.I. in sensitive national security applications, and it can be avoided.

ISAACSON: How do we overcome that, or should we? Should we be a little bit resistant to overcoming that lack of trust?

NEUBERGER: We overcome it through transparency, through talking about how A.I. is being used in sensitive national security applications and how the laws and values of the country are being implemented. The challenge is that technology moves far faster than law and policy. So, we need to ensure that as the Intelligence Community and the military deploy A.I., they are explicit and explain with transparency how they’re doing so, how they keep a human in the loop for certain kinds of decisions, how they validate the data sets being used, and where protections of American civil liberties and privacy are baked in when data sets are created. All of that is key to how A.I. is deployed for sensitive national security applications.

ISAACSON: You talk about all the restraints and civil liberties constraints that, in a democracy, need to be transparent and placed on it. To what extent does that handicap us in competing against China, which does not have those restraints?

NEUBERGER: It certainly makes it harder and potentially makes us slower in deploying A.I. applications for sensitive uses. But I would argue that who we are as a democracy, our citizens’ confidence that their civil liberties and privacy are protected, is an important consideration in deploying artificial intelligence. And we can both use A.I. in military applications, some of those, for example, that I talked about, while also retaining the confidence of our citizens and our allies that we’re doing so in line with our laws and our values.

ISAACSON: In your Foreign Affairs piece on how A.I. will remake espionage, let me read you a quote. You say: with an A.I. race underway, the United States must challenge itself to be first to benefit from A.I., first to protect from enemies who might use the technology for ill, and first to use A.I. in line with the laws and values of democracy. Do you think that A.I. will help defend democracy, or is it a real threat to us?

NEUBERGER: A.I. has tremendous promise, helping intelligence services make sense of the vast amounts of intelligence and data collected, and to do so more quickly and in a way that’s more relevant for policymakers. A.I. will also be a major benefit economically. It also represents a risk, per the quote you just read, because our adversaries will use it as well: authoritarian countries like China, aggressive countries like Iran, but also a whole host of potential terror groups and cyber criminals. And that’s why we want to do both: deploy it to protect our country and also protect ourselves from potential adversaries using the technology for ill. We must do both, keep an eye on how quickly we deploy and also keep an eye on what adversaries are doing and ensure we’re protected, including protecting the sensitive A.I. models that could be driving, for example, how we control large numbers of unmanned drones or unmanned ships in a crisis or conflict scenario.

ISAACSON: If we use A.I. to command the drones, as you said, or command the ships, there’s always been a rule in the military that there’s a human in the loop. Does that disadvantage us in not being as fast as our adversaries could be?

NEUBERGER: A.I. is really key in that it will allow us to be more precise. For example, think about a potential conflict in the South China Sea or the vast Pacific Ocean: being able to detect a particular aggregation of ships approaching and ask, what are those ships? What kind of ships? What does the pattern analysis say, is this an offensive mission, a surveillance mission, or a potential defensive mission? So, A.I. will be critical in informing humans and in making humans more effective in military and intelligence operations. And that’s a decision that the national security community has made: to ensure that A.I. informs humans and makes humans faster and more accurate in making decisions in those kinds of sensitive scenarios.

ISAACSON: You point out that a major distinction between what’s happening in China with DeepSeek and other things versus what’s happening in the U.S. is that China has government-run programs and the companies in China have to share things with the government. You were looking at all sides of that issue before. U.S. companies such as Google and Meta sometimes share data with the U.S. government and sometimes resist it. Do we need to change the way we operate so that the U.S. government can have a better partnership with the private companies developing A.I. here?

NEUBERGER: The private sector is the engine of American innovation, particularly in artificial intelligence. And as the competition between our democracy and authoritarian governments like China heats up over the future of the world order, we know that the U.S.’s world-class private sector and its innovation are part of how we compete on that global stage. What that partnership looks like will need to evolve, for example, how the Intelligence Community uses private sector A.I. models to leverage what they know and then further trains them with classified information, to ensure the insights the Intelligence Community provides really integrate across both.

ISAACSON: When you were in government and the Edward Snowden revelations came along, companies like Google, Amazon, and Meta had been cooperating with the U.S. government. There was some outrage that maybe some of that data was secretly given to the U.S. government, and then that stopped. At the inauguration of President Trump, we saw the leaders of these companies, whether it be Mark Zuckerberg at Meta or Jeff Bezos, really trying to forge a, say, partnership with the Trump administration. Do you think that will affect how big corporations use data when the United States Intelligence Community wants it?

NEUBERGER: So, first, there is real value in A.I. in terms of our economic strength and growth, and in deploying A.I. across a range of industries to drive American innovation and advancement, separate and apart from this. Coming to your question, the laws that govern how those companies share information with the U.S. Intelligence Community and with law enforcement are clear. There was a great deal of increased transparency, as you mentioned, post-Snowden, to help the average American citizen understand those laws. And there’s a real lesson there, because until the Snowden revelations, U.S. intelligence operations were a black box. The average citizen, the average ally, did not understand what information American tech companies were providing, and in response to what standards of law and civil liberties and privacy protections. So, there was a lot of work done following Snowden to rebuild that trust. The lesson is that one cannot allow this partnership between this critical engine of the country’s innovation and the U.S. government to be a black box. As the U.S. national security community deploys A.I. in sensitive intelligence and military applications, explaining that transparently, ensuring the tech companies understand it, and ensuring the average American citizen understands it will be key to avoiding that breakdown. And even more importantly, as we seek to compete with authoritarian models, where that tie is very clear and unconstrained by democratic laws and values, we must also compete on who we are in that way.

ISAACSON: Anne Neuberger, thank you so much for joining us.

NEUBERGER: Thank you for having me, Walter.

About This Episode

Aviation expert Miles O’Brien discusses the plane crash in Washington DC and the recovery effort underway. Former EPA Administrator Gina McCarthy discusses Donald Trump’s executive orders relating to climate. Jessica Hecht and Bill Irwin tell the story of a public health crisis in their play “Eureka Day.” Fmr. NSA official Anne Neuberger explains China’s DeepSeek AI and what it means for the U.S.