One Book A Day

Human Compatible by Stuart Russell: Summary and Notes 

One sentence summary: Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell answers many of the fundamental questions in AI research and application.

One paragraph summary: Stuart Russell’s second book on AI goes deep into the fundamental issues surrounding the kind of superintelligent AI that scares humans out of their wits. Even if you don’t know what goes into AI research, you will gain a lot from reading the book. You will at least appreciate the dangers and the benefits of AI research and application and what it means for the future of humanity.

Favorite quote from the author:

“This ability of a single box to carry out any process that you can imagine is called universality, a concept first introduced by Alan Turing in 1936.31 Universality means that we do not need separate machines for arithmetic, machine translation, chess, speech understanding, or animation: one machine does it all.”

Recently, I have found myself drawn to a list of Elon Musk’s favorite books. Elon loves to read and is renowned for combining ideas from different books into something tangible. He is also very influential and rich, so when you come across a legitimate list that captures his reading interests, there is cause to pause and pick up a book he recommends.

In this case, it was Human Compatible: Artificial Intelligence and the Problem of Control by the famed computer scientist Stuart Russell. Like Elon, Russell is concerned about the potential dangers of artificial intelligence (AI), and frankly, who isn’t? We grew up watching movies like The Terminator, and even if you are not steeped in Skynet lore, you must have watched at least a dozen apocalyptic films where robots take over the world and threaten to kill everyone. Eagle Eye, I, Robot, and Extinction are just a few examples. The point is, we are all scared out of our wits.

So what’s the big deal with AI? I’m not an expert at answering that question, and neither is Elon; that’s why we have to depend on people like Stuart Russell to provide the answers for us. Long story short, Russell says that the current approach to AI development is very risky. Humanity might create a super AI that doesn’t care about us and that will see us as threats worthy of elimination. Russell’s book offers solutions to some of the challenges posed by AI while introducing the reader to many of the philosophical issues at play.

What else is there to love about Human Compatible? The book is written for the average Joe and, if you insist, Jane too. It will not bore you with ceaseless computer science jargon. Russell will hit you with fast-flowing facts, ethical concerns, and perspectives. His words are not just concerning; they will also mesmerize you.

Main Takeaways from Human Compatible by Stuart Russell

  1. Don’t be afraid of narrow AI

  2. We need to be prepared

  3. Three principles of superintelligent machines

  4. It is unlikely that we can turn off a superintelligent AI

  5. We can merge to survive

Lesson 1: Don’t be afraid of narrow AI

There is a lot of noise about superintelligent AI taking over the world, stealing our jobs, and enslaving us. Russell says that while there is some truth to that, people are often confused about what superintelligent AI is. A superintelligent AI, or what computer scientists call artificial general intelligence (AGI), is an entity that can solve problems across many general areas. Most AI systems of today are what’s called narrow AI. They can operate machines, classify images, play games, etc., but no one entity can do all of these things at once. For instance, Google’s AlphaZero can play chess incredibly well, but it cannot drive cars, because that requires another level of sophistication that AlphaZero is yet to attain.

Russell takes the reader through a history class showing how every time general AI was predicted, it didn’t happen. In the 50s, when computers started emerging, the prediction was that general AI would be achieved within a few decades. The same forecast held in the 80s, and so on. Russell makes the case that every generation realizes just how difficult it is to replicate human intelligence. It requires faster machines and more groundbreaking work in math, physics, computational logic, and so on. But that is not to say that there has been no progress: machines are getting increasingly intelligent, and what we have today are narrow AIs that can indeed cause problems at some scale, but not at the kind that Skynet is capable of. So don’t be afraid; it is not yet time.

Lesson 2: We need to be prepared

Just because the threat of superintelligent AI is not imminent doesn’t mean that we should not be prepared for it. While making this point, Russell notes that it might take anywhere from a couple of years to a hundred to achieve that goal, because progress is happening everywhere. At the same time, human ingenuity is hard to predict to begin with. And as he argues, despite what naysayers have been saying for decades, general AI will eventually happen, and when it does, we need it on our side and not against us.

What is required is an AI trained to value human preferences, and despite sounding simple and obvious, training an AI to understand and revere humans is incredibly difficult. To give you an example: not all humans want the same thing, cultures clash, and some people are downright evil. An AGI is also likely to have its own goals, because at that level of intelligence any entity wants to model the world after its beliefs. Thus, the question is: are we ready to be passive beings in the near future?

Also, there is the possibility that as robots take up more and more human tasks, we shall become docile, fat, and passive. Would you want such a future? I certainly wouldn’t. And then there is the concern that government-led efforts to create superintelligence will lead to a dangerous escalation cycle that might harm everyone. We already see that with the Americans, the Chinese, and the Russians: they are all in a race to create killer machines.

Lesson 3: Three principles of superintelligent machines

I don’t know where the obsession with creating three laws for superintelligent machines comes from, but ever since Isaac Asimov came up with the three laws of robotics, everyone else has been at it, including good old Stuart Russell. Russell proposes three principles that, if applied, will safeguard us from the terrible consequences of AI.

  • Machines must serve human preferences. Broadly speaking, AI must serve human interests and not the interests of anything else. Achieving this can be tricky because human needs and preferences are complex and varied. Russell implies that creating a human-first AI will create a safe boundary for AI behavior and expectations.

  • Keep the goal uncertain. Russell says that it is essential to keep a superintelligent AI uncertain about its objective, because it will always defer to the human when the goal is uncertain. That way, we can ensure a symbiotic relationship between the two.

  • Machines should predict human behavior. By observing human behavior, machines should infer human preferences and shape their actions and reactions accordingly. This is needed because human behavior is complex and sometimes contradictory.
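To make the second principle concrete, here is a toy sketch of my own (not code from the book): a machine that holds several guesses about what the human wants, acts only when those guesses agree, and defers to the human otherwise. The function names, threshold, and tea/coffee example are all hypothetical illustrations.

```python
def choose_action(actions, preference_samples, defer_threshold=0.8):
    """Pick the action the machine's beliefs agree on, or defer to the human.

    preference_samples: a list of guessed utility functions, each a dict
    mapping action -> utility, representing the machine's uncertainty
    about human preferences.
    """
    # Count how often each action is best under a sampled preference.
    votes = {a: 0 for a in actions}
    for u in preference_samples:
        best = max(actions, key=lambda a: u[a])
        votes[best] += 1

    best_action = max(actions, key=lambda a: votes[a])
    agreement = votes[best_action] / len(preference_samples)

    # If the guesses disagree too much, ask the human instead of acting.
    if agreement < defer_threshold:
        return "ask_human"
    return best_action

# Here every sampled preference ranks tea above coffee, so the machine acts.
samples = [{"tea": 1.0, "coffee": 0.2}, {"tea": 0.9, "coffee": 0.3},
           {"tea": 0.8, "coffee": 0.1}, {"tea": 0.7, "coffee": 0.6},
           {"tea": 0.9, "coffee": 0.2}]
print(choose_action(["tea", "coffee"], samples))  # -> tea
```

The key design point, in Russell's spirit, is that deference falls out of uncertainty: the machine asks precisely when its model of human preferences is in conflict with itself.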

Lesson 4: It is unlikely that we can turn off a superintelligent AI

Can’t we just turn it off? You will often find this question as part of many discussions on AI, but as Russell implies, the answers are not simple. You cannot simply turn off a superintelligent AI. Why? First, any intelligent machine can easily reason that being turned off is against its own interests. Second, such a machine could create copies of itself and distribute them on a blockchain ledger, making it impossible to ever switch it off.
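The first point connects back to Lesson 3. Here is a toy expected-value calculation of my own (an illustration, not from the book) showing why a machine that is certain its plan is good has no incentive to allow shutdown, while a machine that is uncertain about its objective is actually better off letting the human decide.

```python
def expected_value_of_acting(possible_utilities):
    """If the machine just acts, it receives the true utility, good or bad."""
    return sum(possible_utilities) / len(possible_utilities)

def expected_value_of_deferring(possible_utilities):
    """If the machine defers, the human lets it proceed only when the true
    utility is positive; otherwise the human switches it off (value 0)."""
    return sum(max(u, 0.0) for u in possible_utilities) / len(possible_utilities)

# A certain machine believes its plan is worth exactly +10:
# acting and deferring are equal, so shutdown permission buys it nothing,
# and any cost of being interrupted tips it toward resisting.
certain = [10.0]
print(expected_value_of_acting(certain))     # 10.0
print(expected_value_of_deferring(certain))  # 10.0

# An uncertain machine thinks the plan might be great (+10) or awful (-10):
# deferring is strictly better, because the human filters out the bad case.
uncertain = [10.0, -10.0]
print(expected_value_of_acting(uncertain))     # 0.0
print(expected_value_of_deferring(uncertain))  # 5.0
```

The numbers are made up, but the asymmetry is the point: only an objective-uncertain machine gains anything from leaving its off switch in human hands.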

One of the solutions proposed for containing a superintelligent AI is to confine it in a space where it cannot access the internet or other computer networks. But I think that’s impossible, because no one will build a superintelligent AI in one go. It is a gradual process, and before humans are even aware of what’s going on, it is likely that the machine will already have some level of access to the internet.

Lesson 5: We can merge to survive

Elon Musk is a big fan of merging with AI and even has a company called Neuralink that recently demonstrated a monkey moving things with its brain. Merging is one way to avoid a catastrophic end for our species, because when we merge with AI, we become one and the same.

One of the dangers of merging is that we shall lose our humanity in the process because progressively, we shall become more machine than human. We might even end up with machine parts like we see in some sci-fi movies.

There is also the concern that the future will hold multiple superintelligent entities, each with its own goals. Thus, while we might merge with, say, Elon Musk’s Neuralink, other superintelligent AIs might not be so friendly. Still, it will be interesting to wait and see what happens.

Wrap Up

Stuart Russell’s Human Compatible primarily focuses on answering whether a superintelligent AI can safely coexist with humans. He does a decent job of demonstrating how this is possible and why that matters. He also gives a stark warning — whether we like it or not, general AI is coming, so we should prepare ahead of time.

Who Would I recommend the Book To?

If you care about the future of humanity, this is one of the books you should read. It was hailed as one of the best books on AI published in 2019.

