It’s pretty easy to think of scenarios in which AI would want to destroy humanity, but are there any in which it wants to save humanity? During a conversation between James W. Phillips, special advisor to the British prime minister for science and technology, and Eliezer Yudkowsky, head of research at the Machine Intelligence Research Institute, Yudkowsky suggests that AI could save humanity, but that “it’s not likely to.” Here’s the relevant portion of their conversation from the Spectator:
JP: One of the things that’s shifted in the past few years is the realization that throwing very large amounts of data and computation into machines produces effects in ways that we can’t predict. This is unlike other tech issues. When you see a nuclear reactor, for example, you’ve got the blueprint, you’ve got the designs, you can understand how it behaves. But when it comes to AI, it’s very difficult for policy-makers to understand they are dealing with something unpredictable. Concerns over AI are often dismissed as eccentric, and because we don’t understand the technology, it’s very difficult to explain it. You’ve said before that as soon as you get a highly superintelligent form of intelligence, humanity will be wiped out. For many, that’s a big jump. Why do you believe that?
EY: It’s not that anything smarter than us must, in principle, destroy us. The part I’m worried about is that if you make a little mistake with something that’s much smarter than you, everyone dies, rather than you getting to go back and try again. These systems are more grown than built: we have no idea what goes on inside of them. If they get very, very powerful, they might not want to kill you, but they might do other things that kill you as a side effect, or they might calculate that they can get more of the galaxy for themselves if they stop humanity.
JP: Can you explain why it would do that? And how, even, could it?
EY: Well, if it’s much, much smarter than you, then it will likely get to the point of self-replicating. Yes, it might require a lot of computation, which would need fusion power plants to power it. If we build enough of these, we would start to be limited by heat dissipation before we run out of hydrogen in the oceans. So even if it doesn’t specifically go out of its way to kill us, we could die when the oceans boil off and Earth’s surface gets too hot to live on because of the amount of power that’s been generated. Now, it could go out of its way to save you if it wanted to, but it’s not likely to. We don’t know how to make it want to save us.
Read more here.