
Superintelligence by Nick Bostrom: Summary and Notes


One-sentence summary: To get a sense of the clear and present danger posed by artificial intelligence, read Superintelligence by Nick Bostrom.

One-paragraph summary: Nick Bostrom masterfully introduces the reader to the dangers of a superintelligence. He believes that if little is done in the way of preparation, the human race may be living out its last days. Superintelligence also offers tips and strategies for warding off a malevolent AI.

Favorite quote from the author:

“Some little idiot is bound to press the ignite button just to see what happens.”

Superintelligence by Nick Bostrom is a vast and well-researched book by one of the world's top thinkers on AI. It is a staple of many university courses on AI ethics, and while Nick sometimes writes like an academic, the book remains accessible to the general public. Nick makes sure of that, because he is convinced the rise of artificial intelligence is a terribly important moment in human history: it may be the cause of our extinction, or we may yet thrive.

The book's introduction is fascinating. A group of sparrows decides to adopt an owl for protection, but a wise one among them asks, "shouldn't we first learn how to bring up owls?" It is the same with superintelligence. We want it badly: it will create jobs, strengthen our defenses, and advance our science and mathematics. But are we ready for it? Nick has more worries than answers.

The main takeaways from Superintelligence by Nick Bostrom

  1. The fate of our species will depend on any being that’s more intelligent than us

  2. The paths to superintelligence

  3. Forms of superintelligence

  4. Explosive intelligence will change the world like nothing before

  5. A superintelligence will have the instrumental goal of using unlimited resources

  6. Without special effort, a superintelligence may not share human values at all

Lesson 1: The fate of our species will depend on any being that’s more intelligent than us

The more I read about the dangers of artificial general intelligence (AGI), the more I encounter the same argument: if we share the universe with a more intelligent entity, it will eventually dominate us economically and politically. The reason is simple. Intelligence implies a better way of doing things, and if we end up competing in the same space with an entity of unfathomable capabilities, we will stand no chance.

A good analogy here is the critically endangered mountain gorillas of East Africa. The fate of their species depends on humans, in the same way the fate of our species will depend on the benevolence of an AGI. But can we trust an AGI? The problem is compounded by the possibility of many different kinds of artificial intelligence. Many groups of people, organizations, and even nations are racing to be the first to crack AI, and each could deliver a wildly different result from the others because, unlike human intelligence, the space of possible artificial minds is many orders of magnitude wider.

A good example, one you will often find in discussions on AI safety, is the stamp-collecting machine. If you have an AGI whose goal is to collect as many stamps as possible, it may, depending on its programming, cut down every tree on earth, and even strip the atoms from our bodies and rearrange them into stamps. As Nick points out, the danger is that as long as an AGI has a goal to achieve, it poses a threat to humans that depends on how it interprets its objectives.

Lesson 2: The paths to superintelligence

After arguing that superintelligence is likely to arrive by around 2100, Nick explores the various paths to getting there. The first, and perhaps the most obvious, is to simulate the genetic and evolutionary processes that led to the emergence of human intelligence. The main drawback is that simulating any aspect of evolution, let alone the evolution of the human brain, is extremely computationally expensive.

Another way of creating artificial intelligence is to use the brain as a template. There are different ways of going about this. You might have heard of neural networks, modeled loosely on the networks of neurons in our brains, which already power machine-learning efforts at companies like Google. There is also the possibility of whole brain emulation, which would involve recreating every detail of a human brain on a computer. Again, that is computationally expensive, and it will depend on advances in neuroscience.

Alan Turing’s idea of growing a child AI is also explored. Turing proposed creating an AI that is capable of learning the same way a human child can. The problem with this idea is that it is incredibly difficult to develop learning algorithms that mimic the brain. There is also the possibility of enhancing biological cognition through bio-engineering and selective breeding, although this would be highly controversial.

Finally, there are brain-computer interfaces of the kind proposed by Elon Musk. These would work alongside humans to enhance or complement their intelligence, and such interfaces might even be linked into networks that create a sort of collective superintelligence.

Lesson 3: Forms of superintelligence

Nick identifies three forms of superintelligence:

  • Speed superintelligence
    This refers to a system that can do everything a human can, but at incredible speed. Whole brain emulation is an example of speed superintelligence, since it would be just a human mind running on faster hardware.

    Such a fast mind would experience the world in slow motion. Imagine it ran 10,000 times faster than a biological brain: to it, a falling teacup would seem to hang in the air for over an hour, "like a comet, silently gliding through space toward an assignation with a far-off planet." It might even read a short book before the cup hits the ground, while to a human observer the whole thing is over in a blink. (A back-of-the-envelope calculation after this list makes the numbers concrete.)

  • Collective superintelligence
    A collective superintelligence is an aggregation of many smaller intelligent units. Examples include teams, firms, and humanity as a whole. In AI, a collective superintelligence might comprise many narrow AIs, each good at a particular task.

  • Quality superintelligence
    Nick defines quality superintelligence as intelligence superior to that of humans in the same way human intelligence is superior to that of the chimpanzee.

    All these forms of superintelligence have a distinct advantage over human intelligence, because the digital substrate outclasses the biological one: computer hardware can keep getting faster and more powerful, while the human brain is stuck with its biological limits.
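
To make the time-dilation intuition concrete, here is a minimal back-of-the-envelope sketch in Python. The 10,000x speedup comes from the example above; the one-metre drop height and the physics shortcut are my own assumptions for illustration:

```python
# Rough arithmetic behind the falling-teacup example; this is my own
# illustration, not a calculation from the book. Assumes a one-metre
# drop under gravity and a mind running 10,000 times faster than ours.

G = 9.81           # gravitational acceleration, m/s^2
DROP_HEIGHT = 1.0  # metres (assumed, roughly table height)
SPEEDUP = 10_000   # subjective speedup factor from the example above

# Time for the cup to reach the floor: h = (1/2) * g * t^2, so t = sqrt(2h/g)
fall_time = (2 * DROP_HEIGHT / G) ** 0.5   # about 0.45 seconds of real time

# How long that fall feels to the sped-up mind
subjective_minutes = fall_time * SPEEDUP / 60

print(f"Real fall time:      {fall_time:.2f} s")
print(f"Subjective duration: {subjective_minutes:.0f} minutes")
# Prints roughly 75 minutes: enough subjective time to read a short
# book before the cup shatters.
```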

Lesson 4: Explosive intelligence will change the world like nothing before

Humanity's last invention will be superintelligence. Yes, you read that right, and the mathematician I. J. Good said as much back in 1965: the first ultraintelligent machine would be "the last invention that man need ever make." Once we create a superintelligent entity, it will be capable of producing ever better versions of itself. Nick compares the impact of such an entity with the agricultural and industrial revolutions: the agrarian revolution made the world economy double every 909 years, and the industrial revolution cut that to every 6.6 years. If we were to attain the singularity (the intelligence explosion), the world economy would double every two weeks!
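
As a quick sanity check on those doubling times, here is a minimal Python sketch that converts each one into an annual growth factor. The 909-year, 6.6-year, and two-week figures come from the summary above; treating a year as exactly 365 days is my own simplification:

```python
# Convert economic doubling times into annual growth factors, to show
# how extreme the jump at each historical transition is.

def annual_growth_factor(doubling_time_years: float) -> float:
    """Factor by which the economy multiplies in one year."""
    return 2 ** (1 / doubling_time_years)

eras = {
    "Agrarian (doubles every 909 years)": 909.0,
    "Industrial (doubles every 6.6 years)": 6.6,
    "Post-singularity (doubles every 2 weeks)": 14 / 365,
}

for era, doubling_time in eras.items():
    print(f"{era}: x{annual_growth_factor(doubling_time):.4g} per year")
# Agrarian:          x1.001        (under 0.1% growth per year)
# Industrial:        x1.111        (roughly 11% growth per year)
# Post-singularity:  about x7e+07  (the economy multiplies tens of
#                                   millions of times over in a year)
```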

But we don’t have to get to singularity to enjoy the incredible benefits of AI. We are already benefiting from cutting-edge technologies that are having an impact on every sphere of the economy. They are improving transportation, agriculture, medicine, manufacturing, software development, and so on. Indeed many analysts say that we are on the verge of an intelligence explosion or the fourth industrial revolution.

Lesson 5: A superintelligence will have the instrumental goal of using unlimited resources

Any superintelligent entity is likely to have the instrumental goal of acquiring unlimited resources in service of its primary goal, whatever that goal is. Such a being would likely use every available resource on earth to create more computing power, or send copies of itself into outer space to harvest asteroids and planets for even more. Nick says that if such a being were on our side, it could emulate billions of human minds and send them out to colonize what is sometimes called the Hubble volume: the region of the universe that is still reachable before cosmic expansion puts everything beyond it permanently out of reach. That region contains trillions of stars.

As you can imagine, there are many ways a superintelligent entity hell-bent on colonizing its surroundings could come into conflict with humans. The obvious one is that humans are after the same resources. An AI might also simply calculate that humans are not worth keeping around.

In Human Compatible, another excellent book on the dangers of AI, Stuart Russell makes an analogy you might find familiar. When we step on ants, we don't do so because we hate them; their welfare simply doesn't register on any scale that matters to us. The same could happen to us.

Lesson 6: Without special effort, a superintelligence may not share human values at all

Nick makes a very important point when discussing the emergence of the first AGI: its primary goal may be something simple, like the stamp-collecting machine's, because it is far easier for programmers to build such a machine than to encode human values into one. And that is the danger we face, because such an AI is likely to gain a first-mover advantage over later, more benevolent versions.

The main point Nick drives home is that we must be careful at every stage of AI development, and researchers need to be keenly aware of the dangers AI poses, because once the agent is here, stopping it might be next to impossible. Human Compatible sends a similar message.

Wrap Up

Superintelligence by Nick Bostrom is a very rich text, and this summary does it little justice beyond highlighting the major themes of AI research and development it covers. The book sounds the alarm, just like Human Compatible, because if we don't get AI right, our generation may be the last to walk the earth.

Who Would I Recommend the Book To?

Nick Bostrom writes for humanity. Are you part of humanity? Then you should buy this book.

