Susan D’Agostino
October 17, 2023
Editor’s note: This article was commissioned by the Bulletin of the Atomic Scientists and is also published in WIRED.
MONTREAL, Canada—The main artery in Montreal’s Little Italy is lined with cafes, wine bars, and pastry shops that spill into tree-lined, residential side streets. Generations of farmers, butchers, bakers, and fishmongers sell farm-to-table goods in the neighborhood’s large, open-air market, the Marché Jean-Talon. But the quiet enclave also accommodates a modern, 90,000-square-foot global AI hub known as Mila–Quebec AI Institute. Mila claims to house the largest concentration of deep learning academic researchers in the world, including more than 1,000 researchers and more than 100 professors who work with more than 100 industry partners from around the globe.
Yoshua Bengio, Mila’s scientific director, is a pioneer in artificial neural networks and deep learning—an approach to machine learning inspired by the brain. In 2018, Bengio, Meta chief AI scientist Yann LeCun, and former Google AI researcher Geoffrey Hinton received the Turing Award—known as the “Nobel” of computing—for “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.” Together, the three computer scientists are known as the “godfathers of AI.”
In July, Bengio spoke to a US Senate subcommittee that is considering possible legislation to regulate the fast-evolving technology. There, he explained that he and other top AI researchers have revised their estimates for when artificial intelligence could achieve human levels of broad cognitive competence.
“Previously thought to be decades or even centuries away, we now believe it could be within a few years or decades,” Bengio told the senators. “The shorter timeframe, say five years, is really worrisome because we will need more time to effectively mitigate the potentially significant threats to democracy, national security, and our collective future.”
Last month, on a day when temperatures in Montreal soared past 90 degrees Fahrenheit, Bengio and I sat down in his office to discuss nuance in attention-grabbing headlines about AI, taboos among AI researchers, and why top AI researchers may disagree about the risks AI may pose to humanity. This interview has been edited and condensed for clarity.
Susan D’Agostino: A BBC headline in May of this year declared that you felt “lost” over your life’s work. But you wrote in your blog that you never made this statement. Still, you appear to be reflecting deeply. How would you characterize what you’re going through?
Yoshua Bengio: I said something related to that but the nuance was lost.
I experienced an intense shift over the winter in the way I was perceiving my own work, its purpose, and my priorities. What I tried to express in the BBC interview and then with my blog later is this: It’s not that I’m lost. I’m questioning and realizing that maybe I didn’t pay attention to something important, in a way that’s not just intellectual but also emotional.
I’ve read some of the works about existential risk for at least a decade. Students here in my group and nearby were talking about this. At a technical level, I took very seriously Stuart Russell’s book [Human Compatible: Artificial Intelligence and the Problem of Control] that came out in 2019.
But I didn’t take it seriously emotionally speaking. I was thinking, “Oh yeah, this is something people should look at.” But I was really thinking, “This is far away in the future.” I could continue on my path because what I’m doing will be useful and scientifically important. We want to understand intelligence in humans and animals and build machines that could help us address all kinds of important and difficult challenges. So, I continued without changing anything in the way I was working.
But over the winter, it dawned on me that the dual-use nature and the potential for loss of control were very serious. It could happen much earlier than I had projected. I couldn’t continue ignoring this. I had to change what I was doing. Also, I had to speak about it because very few people—even in the AI community—took these questions seriously until the last few months. In particular, people like Geoffrey Hinton, a few others, and I were listened to more than the folks who had been talking about this and had realized the importance of those risks much earlier than we did.
There’s a psychological aspect here. You construct your identity or the meaning of your life in a particular form. Then somebody tells you, or you realize through reasoning, that the way you’ve painted yourself isn’t true to reality. There’s something really important that you’re ignoring. I understand why it’s very difficult for many of my colleagues first to accept something like this for themselves and then have the courage to speak out about something [the potential catastrophic threats from AI] that’s essentially been taboo in our community for forever.
It’s difficult. People travel this path at different times or at different rates. That’s okay. I have a lot of respect for my colleagues who don’t see things the same way as I do. I was in the same place a year ago.
D’Agostino: How did that taboo express itself in the AI research community earlier—or even still today?
Bengio: The folks who were talking about existential risk were essentially not publishing in mainstream scientific venues. It worked two ways. They encountered resistance when trying to talk or trying to submit papers. But also, they mostly turned their backs on the mainstream scientific venues in their field.
What has happened in the last six months is breaking that barrier.
D’Agostino: Your evolution coincided with the public’s rising awareness of large language models, sparked by OpenAI’s release of ChatGPT in late 2022. Initially, many in the public were wowed, even intimidated by ChatGPT. But now, some are unimpressed. The Atlantic ran a story, “ChatGPT Is Dumber Than You Think.” In a recent US Republican presidential debate, Chris Christie told Vivek Ramaswamy that he’d “had enough already tonight of a guy who sounds like ChatGPT.”
Was the timing of your realization influenced by ChatGPT’s release? Either way, do you now see ChatGPT as a punchline?
Bengio: My trajectory was the opposite of what you’re describing. I want to understand and bridge the gap between current AI and human intelligence. What’s missing in ChatGPT? Where does it fail? I tried to set up questions that would make it do something stupid, like many of my peers in the months that followed ChatGPT’s release.
In my first two months of playing with it, I was mostly comforted in my beliefs that it’s still missing something fundamental. I wasn’t worried. But after playing with it enough, it dawned on me that it was an amazingly surprising advance. We’re moving much faster than I anticipated. I have the impression that it might not take a lot to fix what’s missing.
Every month I was coming up with a new idea that might be the key to breaking that barrier. It hasn’t happened, but it could happen quickly—maybe not my group, but maybe another group. Maybe it’s going to take 10 years. With research, it may feel like you’re very close, but there could be some obstacle you didn’t consider.
If we combine what I was working on regarding the ability to represent incredibly complex probability distributions, which is what this is about, and the ability to learn and use such an amazing amount of information in more reasoned ways, then we could be pretty close. The ability to use information in intuitive ways corresponds to what I and others call “system one abilities.” The thin layer that is missing, known as “system two abilities,” is reasoning.
I started thinking, what if, within a year, we bridge that gap and it scales up? What’s going to happen?
D’Agostino: What did you do once you realized this?
Bengio: At the end of March, before the first [Future of Life Institute] letter [calling on AI labs to immediately pause giant AI experiments] came out, I reached out to Geoff [Hinton, who received the Turing Award along with Bengio and LeCun]. I tried to convince him to sign the letter. I was surprised to see that we had independently arrived at the same conclusion.
D’Agostino: This reminds me of when Isaac Newton and Gottfried Leibniz independently discovered calculus at the same time. Was the moment ripe for a multiple, independent discovery?
Bengio: Don’t forget, we had realized something that others had already discovered.
Also, Geoff argued that digital computing technologies have fundamental advantages over brains. In other words, even if we only figure out the principles that are sufficient to explain most of our intelligence and put that in machines, the machines would automatically be smarter than us because of technical things like the ability to read huge quantities of text and integrate that much faster than the human could—like tens of thousands or millions of times faster.
If we were to bridge that gap, we would have machines that were smarter than us. How much does it mean practically? Nobody knows. But you could easily imagine they would be better than us in doing things like programming, launching cyberattacks, or designing things that biologists or chemists currently design by hand.
I’ve been working for the last three years on machine learning for science, particularly applied to chemistry and biology. The goal was to help design better medicines and materials, respectively for fighting pandemics and climate change. But the same techniques could be used to design something lethal. That realization slowly accumulated, and I signed the letter.
D’Agostino: Your reckoning drew a lot of attention, including the BBC article. How did you fare?
Bengio: The media forced me to articulate all these thoughts. That was a good thing. More recently, in the last few months, I’ve been thinking more about what we should do in terms of policy. How do we mitigate the risks? I’ve also been thinking about countermeasures.
Some might say, “Oh, Yoshua is trying to scare.” But I’m a positive person. I’m not a doomer like people may call me. There’s a problem, and I’m thinking about solutions. I want to discuss them with others who may have something to bring. The research on improving AI’s capabilities is racing ahead because there’s now a lot—a lot—more money invested in this. It means mitigating the largest risks is urgent.
D’Agostino: We have regulation and international treaties for nuclear risk, but, for example, North Korea is not at the table. Could we ever really contain the risk that AI poses?
Bengio: Depending on how cautious we end up being collectively, we could more or less contain the risks with national regulation and international treaties. It’s important, like for nuclear treaties, to have minimal standards across nations. The harms that AI could do are not bounded by national borders.
There’s no 100 percent guarantee that nothing bad will happen. Even if we had an international treaty that bans AI more powerful than some level, somebody will disrespect those constraints. But delaying that by, say, 10 years would be great. In that time, we might improve our monitoring. We might improve our defenses. We might better understand the risks.
Currently it takes a lot of money and specialized hardware to build [advanced AI systems]. Right now, you can’t really buy hardware like GPUs [graphics processing units] in very large quantities without being noticed, but governments are not tracking who’s buying what. They could start by doing that. The US already has export controls for these things, which hurts China and North Korea if they want to be in that race.
Time is of the essence, and regulation can reduce the probabilities of catastrophes or, equivalently, push back the time when something really bad is going to happen. Or minimize the amplitude of what may happen.
Unfortunately, the dropping of the bombs in Hiroshima and Nagasaki really is the reason why governments came around the table and were willing to discuss, despite the Cold War. I hope we don’t need to have that level of catastrophe before we act. But it may come to that.
D’Agostino: The public likes to talk about artificial general intelligence. Do you think that could happen overnight? Or is there a continuum there?
Bengio: There’s a continuum. Absolutely. I used to not like that term because I think completely general intelligence can’t exist the way it was defined 20 years ago. But now the way people understand it and the way media uses it just means “an AI system that is good at a lot of things.”
From the point of view of harm, it doesn’t matter if it’s better than us at everything. Maybe there’s some game in which we win. If they’re better than us at things that could harm us, who cares [about a game we could win]? What matters is their abilities in areas that could yield harm. That’s what we should be concerned about—not some uber definition of, “oh, we have AGI.” It could be dangerous even now if we design it with malicious goals in mind.
D’Agostino: Is there a potential interplay of AI and climate change that carries risk?
Bengio: AI should mostly help with climate change. I don’t see AI using climate change as a weapon unless we had a loss-of-control situation. Then, changing the climate might be a way [for AI] to accelerate our destruction or wreak havoc in society.
D’Agostino: But how could a machine change the climate?
Bengio: This is an important question. A lot of people are responding to concerns [about AI risks] by saying, “how could a computer do anything in the real world? It’s all virtual. We understand cyberattacks because it’s all in the machine.”
But it’s very simple. There are many ways in which things that happen on computers can affect us. First of all, a lot of our economic infrastructure rides on computers—our communications, our supply chains, our energy, our electricity, and transportation. Imagine if many of these things were to fail because of cyberattacks. It might not destroy everyone, but it might bring society to such a chaos that the amount of suffering could be huge.
It’s also plausible that, given progress with manipulating and understanding language with AI systems, we could have AI systems in a few years that could influence humans. They could talk us into doing things for them.
Think about conspiracy theorists [as an analogy]. An AI system could be loose on the Internet. There it could start playing with people on social media to try to see what dialogue would succeed in changing people’s minds about something. That could bring us to do some little actions that, with other actions, could yield a catastrophic outcome. That is plausible.
Of course, maybe that won’t happen. Maybe we have enough defenses, and maybe humans are hard to convince. But everything I’ve seen in recent years with conspiracy theories makes me think that we are very influenceable. Maybe not everyone is, but it’s not necessary for everyone to be. It would only need enough people or enough people in power to make a difference and create catastrophic outcomes. So, the humans might do the dirty work.
D’Agostino: So, human ignorance is an Achilles heel. Are we potentially vulnerable to AI in other ways?
Bengio: Here is another, even simpler [threat] that doesn’t even require nearly as much belief in things that don’t exist right now. An AI system could buy off people to do a job. Criminal organizations do things you ask them to do for money, and they don’t ask where the money comes from. It’s fairly easy even now for a person or a machine to have boots on the ground through computers and accessing the Dark Web.
If we look at least five years ahead, it’s also plausible that we figure out robotics. Right now, we have machines that are good at vision. We also have machines that are good at language. The third leg of a robot is control—having a body that you can control in order to achieve goals. There’s a lot of research in machine learning for robotics, but it hasn’t had its breakthrough moment like we’ve had for language and vision in the last decade. But it could come sooner than we think.
D’Agostino: What’s the obstacle in having a breakthrough moment with robotics?
Bengio: [In robotics], we don’t have the scale of data that we have, for example, for language and images. We don’t have a deployment of 100 million robots that could collect huge quantities of data like we are able to do with text. Even across cultures, text works. You can mix text from different languages and take advantage of the union of all of them. That hasn’t been done for robotics. But somebody with enough money could do it in the next few years. If that’s the case, then the AI would have boots on the ground.
Right now, an AI system that’s become rogue and autonomous would still need humans to get electricity and parts. It would need a functioning human society right now. But that could change within a decade. It’s possible enough that we should not count on this protecting us.
D’Agostino: Tell me about your concerns with the potential interplay of AI with nuclear risk or biosecurity?
Bengio: We really want to avoid making it easy for an AI system to control the launch of nuclear weapons. AI in the military is super dangerous, even existential. We need to accelerate the international effort to ban lethal autonomous weapons.
In general, we should keep AIs away from anything we’ve got that can produce harm quickly. Biosecurity is probably even more dangerous than the nuclear danger associated with AI. Lots of companies will take a file that you send them that specifies a DNA sequence and then program some bacteria, yeast, or viruses that will have those sequences in their code and will generate the corresponding proteins. It’s very cheap. It’s quick. Usually, this is for good—to create new drugs, for example. But the companies that are doing that may not have the technical means of knowing that the sequence you sent them could be used in malicious ways.
We need experts in AI and in biotechnology to work out this regulation to minimize those risks. If you’re an established pharma company, fine. But somebody in their garage should not be allowed to create new species.
D’Agostino: What sense do you make of the pronounced disagreements between you and other top AI researchers, including your co-Turing Award recipient Yann LeCun, who did not sign the Future of Life Institute letter, about the potential dangers of AI?
Bengio: I wish I understood better why people who are mostly aligned in terms of values, rationality, and experience come to such different conclusions.
Maybe some psychological factors are at play. Maybe it depends on where you’re coming from. If you’re working for a company that is selling the idea that AI is going to be good, it may be harder to turn around like I’ve done. There’s a good reason why Geoff left Google before speaking. Maybe the psychological factors are not always conscious. Many of these people act in good faith and are sincere.
Also, to think about these problems, you have to go into a mode of thinking which many scientists try to avoid. Scientists in my field and other fields like to express conclusions publicly that are based on very solid evidence. You do an experiment. You repeat it 10 times. You have statistical confidence because it’s repeated. Here we’re talking about things that are much more uncertain, and we can’t experiment. We don’t have a history of 10 times in the past when dangerous AI systems rose. The posture of saying, “well it is outside of the zone where I can feel confident saying something,” is very natural and easy. I understand it.
But the stakes are so high that we have a duty to look ahead into an uncertain future. We need to consider potential scenarios and think about what can be done. Researchers in other disciplines, like ethical science or in the social sciences, do this when they can’t do experiments. We have to make decisions even though we don’t have mathematical models of how things will unfold. It’s uncomfortable for scientists to go to that place. But because we know things that others don’t, we have a duty to go there, even if it is uncomfortable.
In addition to those who hold different opinions, there’s a vast silent majority of researchers who, because of that uncertainty, don’t feel sufficiently confident to take a position one way or the other.
D’Agostino: How did your colleagues at Mila react to your reckoning about your life’s work?
Bengio: The most frequent reaction here at Mila was from people who were mostly worried about the current harms of AI—issues related to discrimination and human rights. They were afraid that talking about these future, science-fiction-sounding risks would detract from the discussion of the injustice that is going on—the concentration of power and the lack of diversity and of voice for minorities or people in other countries that are on the receiving end of whatever we do.
I’m totally with them, except that it’s not one or the other. We have to deal with all the issues. There’s been progress on that front. People understand that it’s unreasonable to discard the existential risks or, as I prefer to call them, catastrophic risks. [The latter] doesn’t mean humans are gone, but a lot of suffering might come.
There are also other voices—mostly coming from industry—that say, “No, don’t worry! Let us handle it! We’ll self-regulate and protect the public!” This very strong voice has a lot of influence over governments.
People who feel like humanity has something to lose should not be infighting. They should speak in one voice to make governments move. Just as we’ve had public discussions about the danger of nuclear weapons and climate change, the public needs to come to grips with the fact that there is yet another danger of similar potential magnitude.
D’Agostino: When you think about the potential for artificial intelligence to threaten humanity, where do you land on a continuum of despair to hope?
Bengio: What’s the right word? In French, it’s impuissant. It’s a feeling that there’s a problem, but I can’t solve it. It’s worse than that, as I think it is solvable. If we all agreed on a particular protocol, we could completely avoid the problem.
Climate change is similar. If we all decided to do the right thing, we could stop the problem right now. There would be a cost, but we could do it. It’s the same for AI. There are things we could do. We could all decide not to build things that we don’t know for sure are safe. It’s very simple.
But that goes so much against the way our economy and our political systems are organized. It seems very hard to achieve that until something catastrophic happens. Then maybe people will take it more seriously. But even then, it’s hard because you have to convince everyone to behave properly.
Climate change is easier. It’s alright if we only convince 99 percent of humans to behave well. With AI, you need to ensure no one is going to do something dangerous.
“Powerless,” I think, is the translation of impuissant.
I’m not completely powerless, because I can speak, and I can try to convince others to move in the right direction. There are things we can do to reduce those risks.
Regulation is not perfect, but it might slow things down. For example, if the number of people who are allowed to manipulate potentially dangerous AI systems is reduced to a few hundred in the world, it could reduce the risks by more than 1,000 times. That’s worth doing, and it’s not that hard. We don’t allow just anybody to fly a passenger jet. We regulate flying, and that reduces the accident rate by a whole lot.
Even if we find ways to build AI systems that are safe, they might not be safe from the point of view of preserving democracy. They could still be very powerful, and power gets to your head if you’re human. So, we could lose democracy.
D’Agostino: How do you jump from the existence of a safe-but-powerful AI system to losing democracy?
Bengio: For example, it could start with economic dominance. I’m not saying that companies can’t do things that are useful, but their mission is to dominate others and make money.
If one company makes faster progress with AI at some moment, it could take over economically. They provide everything much better and cheaper than anyone else. Then they use their power to control politics, because that’s the way our system works. On that path, we may end up with a world government dictatorship.
It’s not just about regulating the use of AI and who has access. We need to prepare for the day when there’s going to be an abuse of that power by people or by an AI system if we lost control and it had its own goals.
D’Agostino: Do you have a suggestion for how we might better prepare?
Bengio: In the future, we’ll need a humanity defense organization. We have defense organizations within each country. We’ll need to organize internationally a way to protect ourselves against events that could otherwise destroy us.
It’s a longer-term view, and it would take a lot of time to have multiple countries agree on the right investments. But right now, all the investment is happening in the private sector. There’s nothing that’s going on with a public-good objective that could defend humanity.