Read Transcript EXPAND
PAULA NEWTON, ANCHOR: So, now to artificial intelligence, which, of course, is the next frontier of the technological revolution. But as it continues to evolve at breakneck speed, we add, how can we ensure it’s safe, ethical, and accessible for all to use? To answer this, Hari Sreenivasan spoke with Carme Artigas, co-chair of the U.N. Artificial Intelligence Advisory Body.
(BEGIN VIDEOTAPE)
HARI SREENIVASAN, CORRESPONDENT: Paula, thanks. Carme Artigas, thank you so much for joining us. You are a co-chair of this U.N. A.I. advisory board and you’ve published this final report. What’s the top line? What are the findings that you’re most interested in making sure that people are aware of?
CARME ARTIGAS, CO-CHAIR, U.N. ARTIFICIAL INTELLIGENCE ADVISORY BODY AND SPANISH SECRETARY OF STATE FOR DIGITALIZATION AND A.I.: Yes. First of all, I think we are all aware of the great possibilities that artificial intelligence is going to reach humanity in terms of efficiency, in productive processes, of course, opportunity to spread public health or education, and of course, on scientific research. And we are all aware that these are great possibilities, but at the same time, there are a lot of risks. At the short-term in terms of fundamental values, but also in the long-term, in terms of safety. What we need to ensure is that all these opportunities are absolutely developed. And if we leave all this technology, which is very transformative and governed, not only we are not be able to capture all these opportunities, but probably are going to exacerbate some of the problems we have today, especially talking about inclusiveness.
SREENIVASAN: You know, when you talk about lack of inclusiveness, the report points out, in several different ways, the sort of giant gaps there are just in how unequal the distribution of where artificial intelligence is today. One of the things that you pointed out is seven countries are party to all the different kind of A.I. governance efforts that are happening around the planet, and 118 countries are part of none of them. So, is there a risk here that the rest of the world, meaning the majority of the world, gets left behind?
ARTIGAS: Yes, it is. I mean, in fact, there is a great risk to increase the current digital divide with a new A.I. divide. And I think that we must ensure that the benefits and the cost of any technology revolution is equally distributed among different social classes, among different countries. The reality is that even though there are a lot of very important international efforts for government in terms of ethics guidelines, even regulations in some parts of the world, we cannot leave all these countries without being sitting at the table, without being part not even in the development, but also already in the discussion. So, we want equality in the benefits. We need to ensure equality in the access. And to ensure equality in the access, we need to make them participate in all these new instruments we are proposing to ensure that A.I. is government — governed at the global level. And I also provide with this less development countries with the right tools they need to develop their solutions, especially when we think that A.I. is going to be fundamental in the achievement in sustainable development goals. And I mean, by these are three main, I would say, entities that are needed, which is data, computing capabilities, and talent. And that is one of the proposals we have is on a capacity development network and a capacity building also funded by a global (INAUDIBLE).
SREENIVASAN: You know, some of this comes down to the kind of computing power and where that computing power is located. And right now, what you point out is that there is almost no computing cluster, all the 100 biggest computing clusters in the world, none of them are in developing countries at all, right? So, if the physical horsepower that’s necessary to enable the talent in a smaller country to try to build applications on A.I., et cetera, just doesn’t exist there, how do we even begin that?
ARTIGAS: Exactly. That’s the right question. We don’t have access to that computing capabilities. And this why in the capacity building network initiative, we propose a global fund that can be funded by private and public entities, but also not only in money, but also in kind. We need to provide the capacities to build their own entrepreneurial ecosystems that these countries need. That’s what’s also proposed a data framework, because most of the problem is that all the special and large language models, general purpose A.I. systems and models that are being developed by the Global North, they are only developed by data from the Global North. So, there is a lack of representation, and therefore, we cannot pretend that this a universally adopted technology that can benefit all, which is (INAUDIBLE).
SREENIVASAN: You know, just the other day we saw that there was an investment between BlackRock and Microsoft and they want to put $30 billion down to co-invest in data centers, right? They even have NVIDIA as a partner. But most of that is America centric. And I wonder, how does, kind of — how do the suggestions that you’re making here, do you pick up the phone and call Satya Nadella and say, hey, listen how about a couple of those data centers in a couple of other countries that could use it?
ARTIGAS: Well, we’re talking a problem that has a lot to do with geopolitics, and of course, we’re not messing into that. What we see is what are the gaps, what are the instruments that need to be set, just where is the place where all these conversations need to take place? The point is that we don’t have yet a multilateral platform for collaboration, for example, on safety. Safety is very important for A.I., to design the safeguards and guardrails that we can really trust the technology and therefore, adopt it. Because I think we are all interested in get all these benefits and get all these opportunities. And therefore, that we can adopt it with trust. Trust for the consumers and trust for the citizens, which we are not saying here is giving all the answers to all the problems, what we are proposing here was what are the instruments that are not yet in place that are necessary because they are covering the gaps. And I think the other important thing for me, the — one of the most important recommendations is the scientific panel. We need transparency on the risks and on the opportunities. And without data and scientific evidence, not even policymakers can be sensitive roles to guide a properly.
SREENIVASAN: How do you create that incentive for transparency, right? Like right now, for example, when it comes to intellectual property, there’s a lot of concern that a lot of the large language models have been trained on copyrighted material. So, I wonder if you have this ability to convene different countries, whose law do you agree on? Whose intellectual property law are you going to go by? Whose human rights law are you going to go by? What is freedom of speech in one country versus another? How do you get through those kinds of thorny issues?
ARTIGAS: Well, I think there’s a difference that so many people mix, which is one thing is ethics, another thing is regulation, another thing is governance. When we’re talking about ethics is, how should companies or governments — because this affects also the use of A.I. by governments, how should they behave when in a morally acceptable way or in the way we expect them to behave, that’s ethics guidelines. But then, it comes governance. And governance means, which are the instruments I need to put in place to ensure that these companies and this governance are behaving ethically? And regulation is one of these tools, but it’s not the only one. I come from Europe, and I’ve been an active negotiator on the European A.I. Act, and we solve this on our European way, but it doesn’t need to work for everyone. What we expect here is that when we talk about governance, it can be through regulation, but it can be also through market incentives. It can be with oversight boards. It can be with treaties. It can be with many other ways. We are proposing some instruments to make this happen. And in terms of regulation, we cannot expect that all the part of the competition that will have the same regulation. But what we can expect that is a convergence on a very important minimum, which is that anything with on A.I. is for the common good. It’s based under the U.N. chapter, under the international law and based on human rights, and I think that’s the bare minimum we should ask to any country in the world and any company in the world.
SREENIVASAN: Yes. You are, for our audience that doesn’t know you, the Spanish secretary of state for digitization and artificial intelligence. So, you’ve had these conversations across Europe. And I wonder how did you balance the need for making sure that it’s comprehensive, that you understand the technology and at the same time, the sort of need for speed because so often we find, at least in the United States, regulation is about, I don’t know, five to eight years behind where the technology already is. So, by the time it gets maybe litigated in the court system, the technology has evolved so fast, right?
ARTIGAS: Well, exactly, that’s the big challenge. How can we regulate the technology, which is a continuous evolution? How can we make these laws, these regulations, or these best practices future proof? And that must be embedded in the law mechanism itself. So, in particular, the EU AI Act has its own renewable mechanisms, and a lot of the things that are proposed have been designed together with the industry. We have all the same principle. Anything that we are proposing here as the global governance are very agile instruments that can evolve according to the needs. But what we cannot do is to do nothing, to wait until the harm is done. Because governance has not — must not be seen as an inhibitor, not inhibitor of innovation. It must be seen as an enabler. If we give trust to consumers and users, people will adopt A.I. massively. I think that’s what we are not seeing. And I think the risk, we don’t need to wait five more years to know what are the potential risks. I think we are on time now to make things happen and to ensure that everybody’s doing the things right first time, because probably we can — we are not going to be able to revert back the potential harm we can create.
SREENIVASAN: You know, I looked at the report and the amazing kind of confluence of the number of experts who are very concerned about some of the negative risks of A.I., you know, when it comes to information integrity on how people are able to tell fact from fiction, I mean, that’s something that we are here now, right before an election in the United States thinking about much more closely. But I wonder, how — you know, what are the conversations that are necessary to try to figure out some sort of baseline of ensuring that, you know, a surveillance state doesn’t take over in a harmful way, or that information integrity is not kind of destroyed across different societies?
ARTIGAS: Exactly. This where we think there must be this consensus. I will say that we can compete for market share, we can compete for talent, but we cannot compete for safety. We cannot compete for human rights. I think that national countries will have to put their own regulations to limit the power of governments or companies. Again, in the EU AI Act, we put five cases what we consider forbidden uses of A.I. So, things that even though are technically feasible, we don’t want that to happen in Europe, for example, social scoring, where we know that other parts in the world is something widely accepted. So, we don’t pretend, through the U.N., to replace the role that all governments leaders need to put in place in their countries. What we’re seeing is wherever it is in the national level must be encompassed by a consensus in very important things, on what are the risks, how do we prevent from unintended misuses of technology, how do we set up guard rails that a risk in a country is also a risk in another country? Also, how do we align each other on the standard, technical standards? How do we set up scientific panels to really — all these risks you are mentioning are not just fears with no scientific evidence, because we are focusing a lot on the risk and we’re not focusing, therefore, on the opportunities, we firmly believe (ph) are huge. And I think we are all in the same boat, companies, citizens, and governments that we use A.I. for the good of humanity. And I think that is a great opportunity.
SREENIVASAN: Even if you wanted to focus on the potential benefits of A.I. there are significant concerns with the amount of energy that is necessary to power some of the data centers where all of this computing power would be working, right? So, here we are, on the one hand, in a climate crisis that is a significant, you know, kind of cost for the world. Are we making things worse when we are looking at how A.I. is developing today without really any environmental guardrails?
ARTIGAS: Absolutely. The level of development of A.I. with the level of consumption is not only of energy, but especially of water is not sustainable. And because we think that A.I. can be very positive for the development of sustainable development goals, we need to ask for sustainable requirements also to the software industry and they are required by any other industry. I think one of the recommendations that we expect the scientific panel will show some light is in how to do that in a better way. How can we be more efficient in the software development, so that we don’t have this excessive consumption, which is absolutely like contradictory or using A.I. to improve energy consumption or be more efficient. At the same time, the technology is not sustainable by itself. I think that’s the way we want to go. That’s where this international consensus must stipulate.
SREENIVASAN: Most of what we’re thinking about A.I. as consumers might be chatbots, but there are, you know, much darker uses of artificial intelligence that we are slowly starting to understand. One is in the use of autonomous weapons. And this a completely discreet conversation that has mostly military stakeholders and, you know, heads of state involved. So, I wonder, if in this kind of an advisory model, whether you’ve come up with anything to suggest or alter the course of how A.I. could be used in defense.
ARTIGAS: Well, in this particular matter, we see it. We don’t need to provide a different instrument because we already have the Geneva Convention. And what we are recommending in the report it is, of course, we, I would say, expect or claim for a treaty in 2026 to ban these autonomous weapons, but this is like a proposal that we do, but really the place where this must be discussed is in the Geneva Convention. So, we don’t need to create a different instrument for that. There is already a multilateral platform to discuss this topic. Of course, when we’re talking that A.I. must be for the good of humanity, we, of course, consider that this cannot be harmful for people.
SREENIVASAN: We’ve also already seen in — here in the United States horrible cases where the models that the artificial intelligence was trained on, especially for visual recognition, ends up creating biases and exacerbating biases from the people who might have been programming it. And some of that might be conscious, some of that might be unconscious. So, how do you figure out how to create any kind of a conversation, much less a standard so that companies in Europe and companies in the United States, and maybe even companies in China can say, here’s what to avoid to make sure that your data on the response and the output is better?
ARTIGAS: Yes. I would say that companies are very, very responsible. I see that all the software sector is very responsible, but they need to do things better. They need to improve the products. And when we are talking about this, for example, the high policy dialogue, we include the companies there. It’s not only the countries talking among them, we need to include academia, we need to include the companies, we need to include the government, the status (ph) society, and that’s where we need. There is a conversation in it. I think that all this going to progress on the technical point of view. And — but it is true that all these models are general purpose, then are going to be refined also by private data for specifically use cases on the industries. So, I think one is the normal evolution of the products and the other is how can we assess the risk that this can have, for example, in discrimination and fundamental rights or values. And that’s where, again, we need this conversation. We need to put together the developers, the users, the governments, policymakers and come to this consensus and standards.
SREENIVASAN: What do you think right now is the biggest obstacle of trying to establish something like this? Even if it’s not sort of a hard agency, but even the kind of softer steps that you’re suggesting, what does success look like in five years?
ARTIGAS: Well, I think now the first immediate step is to gain the support of the member states in the voting they have on the global digital compact, that’s going to take place on Sunday, within the discussions of the common agenda on the United Nations. So, the first thing is that this proposal that a group of independent experts that, as you mentioned, is not only 30 — the 39 members of our board, it’s more than 2,000 experts in the world that has — have participated in different consultations, more than 70 consultations all over the world. So, we are proud and we are, say, quite confident that these recommendations make sense and that these recommendations have gathered absolutely all the sensitivities. So, the next step is that we really gain the support of the permanent representatives on the United Nations and to push forward these initiatives. But if even all these recommendations are not adopted, I think we should start the discussion from the side of society point of view. We need to put all these challenges on the table and we create also, we expect that we create a conversation around these topics from now on.
SREENIVASAN: Carme Artigas, thanks so much for joining us.
ARTIGAS: Thank you so much.
About This Episode EXPAND
Correspondent Jeremy Diamond reports from Tel Aviv. David Suzuki and Bodhi Patil discuss their ongoing battle against climate change. Coralie Fargeat addresses beauty standards placed on women in her new film “The Substance.” Co-Chair of the UN Artificial Intelligence Advisory Body Carme Artigas talks about AI governance.
LEARN MORE