Life 3.0 by Max Tegmark: Summary and Notes 


One sentence summary: Max Tegmark does an excellent job of laying out the philosophical issues around artificial intelligence. His work screams safety, safety, safety.

One paragraph summary: Life 3.0 by Max Tegmark discusses, among other things, what it means to be alive in the most interesting time in human history. His focus is on artificial intelligence and the impact it will have on everything and everyone. The verdict? It sure is exciting, but we need to be careful while we are at it.

Favorite quote from the author:

“The world’s best computers can now out-remember any biological system — at a cost that’s rapidly dropping and was a few thousand dollars in 2016.”

Max Tegmark is an AI researcher, MIT professor, co-founder of the Future of Life Institute, and friends with Elon Musk, Sergey Brin, and Larry Page. Like many machine learning researchers, Max is concerned about the future of humanity in the face of the increasing sophistication of AI. In particular, he doesn’t want to see the development of a malevolent AI whose goals are misaligned with those of humans. He argues that the next 100 years may be the most important in humanity’s history, so there is a need to be extra vigilant.

What I liked about the book was the following argument: we should not develop AI without building ethics into it, because we don’t know how long developing ethical AI will take in the first place. In fact, the book opens by describing a fictional AI so powerful that it rises to the level of a world government within a short period.

How is it able to do this? Max explains it this way: intelligence is a form of control. We can tame tigers not because we are stronger but because we are more intelligent. Similarly, a super-intelligent AI will exert control over humans simply because it is more intelligent. The point of all of this is that we should prepare for what’s coming.

The book has many more lessons, and I would love to share them with you guys.

Main takeaways from Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

  1. The three stages of life

  2. No one knows what will happen when a super-intelligent AI arrives

  3. The common myths surrounding AI

  4. Computation is substrate-independent in the same way that information is

  5. Intelligence implies the ability to achieve complex goals

  6. In the near term, AI will significantly enhance our lives

Lesson 1: The three stages of life

Max has an interesting take on the evolution of life in the universe. He says that it goes through three stages:

  • Stage 1: Life 1.0. Life 1.0 is a purely biological life form that depends on its DNA for evolution. All behaviors are hardcoded in the DNA; to change, the organism must evolve over generations.

  • Stage 2: Life 2.0. Life 2.0 refers to organisms that can change their software, such as their habits and skills. Humans are examples of Life 2.0 because they can acquire language and skills. But at the same time, they can’t change their hardware without evolving.

  • Stage 3: Life 3.0. Life 3.0 refers to beings that can change both their hardware and their software. There is currently no Life 3.0 on Earth, but it might appear in the next 100 or so years in the form of artificial general intelligence (AGI).

Lesson 2: No one knows what will happen when a super-intelligent AI arrives

There is so much controversy around AI because no one knows what will happen and what it will mean for humanity. There is only speculation, but if Max is to be believed, there is nothing to stop the coming age of AI. Things are happening fast, and humanity only has a hundred years or so to prepare.

The period that we are living in is perhaps the most important in human history. It will determine whether humans live to see the next hundred years or not. The danger is that someone might create an AGI whose goals are misaligned with those of humans. In that case, it might see humans as obstacles. Max uses the interesting analogy of ants to drive the point home. You probably step on several ants as you walk, not because you hate ants but because you can’t help it or don’t think much about it. An AGI might see humans in the same way, as trivial beings that shouldn’t stand in the way of higher goals.

While the most significant risk to humanity comes from an AGI, there is also a lot to be said about narrow AI. Narrow AI refers to artificial intelligence specialized in a specific area, like self-driving. If, say, such an AI had a glitch, it could cause havoc on major highways. Max also warns against the risks of such AI.

Lesson 3: The common myths surrounding AI

  • Myth 1: Superintelligence by 2100 is inevitable. The truth is, no one knows when we will get AGI; it might arrive in a few decades, or it might take more than 100 years. What we do know is that there is nothing in the laws of physics to prevent us from developing one.

  • Myth 2: Only Luddites worry about AI. Everyone is worried about AI, from governments and AI researchers to the average Joe. Even if you just bake cakes for a living, there is the possibility that some clever robot will take your job in the future.

  • Myth 3: AI turning evil. As Max points out, the real danger is not AI becoming evil but AI becoming highly competent with goals that are misaligned with ours. Evil is usually an outcome, not a design, and there are many ways the goals of an AI could differ from ours. To speculate: a superintelligent AI might come to see democracy as a suboptimal form of government and try to overthrow governments.

  • Myth 4: Robots are the main concern. A superintelligent entity doesn’t need a body to cause chaos, just an internet connection. It can manipulate markets, hire human minions, run empires, and so on. The threat to humanity is an intelligence that we can barely control, let alone understand.

  • Myth 5: Machines can’t have goals. A heat-seeking missile has a goal; in this sense, a goal is simply what something is designed to do.

  • Myth 6: Superintelligence is decades or hundreds of years away, so there is no need to worry. The real concern is that it might also take hundreds of years to make AI safe, which is why we need to plan ahead.

Lesson 4: Computation is substrate-independent in the same way that information is

The fact that computation is substrate-independent has to be the most fascinating concept in the book. Substrate independence means that a super-intelligent AI could run on any computer, as long as the device is physically capable of handling the computation. It could work on a Mac, a Windows PC, or an Android phone, simply because computation is possible on all of these platforms. The superintelligence would not even need to know what kind of transistors its processors use, in the same way that we have no direct awareness of how our own brains operate. The point is, the hardware only matters insofar as it can handle the computation.

Substrate-independence implies that if a computer can produce the same results as the human brain, there is no reason to think that it cannot be as intelligent.

Max puts it excellently when he writes:

“We’ve now arrived at an answer to our opening question about how tangible physical stuff can give rise to something that feels as intangible, abstract and ethereal as intelligence: it feels so non-physical because it’s substrate-independent, taking on a life of its own that doesn’t depend on or reflect the physical details. In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.

In other words, the hardware is the matter and the software is the pattern. This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms.”

Lesson 5: Intelligence implies the ability to achieve complex goals

A single IQ score can’t capture intelligence, because intelligence is the ability to achieve a broad range of goals. In today’s world, AI is very good at achieving narrow goals like playing chess, while human intelligence is remarkably broad. As noted earlier, intelligence is substrate-independent: it can be built on any chunk of matter as long as that matter has stable states to serve as memory and universal building blocks that can be combined to implement any function, which is what computation requires.

At least for the moment, what keeps AGI out of reach is that humans have not yet created algorithms that learn the way humans do. With the ever-growing sophistication of neural networks and new discoveries in mathematics, physics, and computer science, it is only a matter of time before we get there.

Lesson 6: In the near term, AI will significantly enhance our lives

In the near term, AI will greatly enhance our lives. It will revolutionize medicine, transport, and education, and contribute to breakthroughs in math and science. But while the benefits will outweigh the risks in this period, there are still many dangers. One that is particularly concerning is the rise of autonomous killing machines. Max calls for an international treaty, similar to the nuclear non-proliferation treaty, to regulate such machines and technologies. The worry is that without one, nations will engage in an arms race, and everyone will lose in the end.

Wrap Up

I expected Life 3.0: Being Human in the Age of Artificial Intelligence to be full of technical jargon, but it was surprisingly easy to read, and I think I know why. Max was writing for folks like you and me because, as I have learned, everyone needs to be in the know when it comes to AI. Artificial intelligence is, after all, the most significant step in our evolution as humans.

A good complement to this book is Superintelligence by Nick Bostrom. It offers a slightly more in-depth look at the same issues.

Who Would I Recommend the Book To?

The futurist who wants to know what the next step in human evolution is going to be.

GET THE BOOK ON AMAZON
