Reflections on AI

Lex Fridman interview with Max Tegmark

Apr 20, 2023 Source

I wish I had written this summary sooner after listening to this podcast, because now I’ve forgotten most of it. (I listened to it about a week ago, when it first came out.)

At a very high level: Max Tegmark is an AI researcher at MIT, the co-founder of the Future of Life Institute, and author of the book Life 3.0: Being Human in the Age of Artificial Intelligence. He is also one of the signatories of the open letter calling for a pause on large-scale AI experiments (one of the three approaches to AI safety).

Tegmark shares Yudkowsky’s concern that AGI may very well lead to humanity’s extinction. He does seem a bit less pessimistic than Yudkowsky, though. Something about Tegmark’s point of view that feels more right to me than Yudkowsky’s is that he puts more emphasis on scenarios where AI kills humanity not out of malice or any direct intention at all, but as a side effect of pursuing some goal in a way that makes our world uninhabitable to us (e.g. by raising CO2 levels to a point that is lethal to humans). To be clear, Yudkowsky also doesn’t seem to think that AI will want to kill us out of anything like hatred for humanity; rather, killing us would simply be a necessary step in some sequence of actions that leads to the outcome the AI wants.

Perhaps I have an easier time grasping Tegmark’s way of thinking about the risk because it feels more familiar. Seldom do humans think (to my knowledge, anyway) like this: “We want to populate this area, but it is inhabited by bears. Therefore we must eradicate all of the bears, and then we can settle.” Rather, we tend to think more like this: “We want to populate this area. Let’s do it.” Then along the way we encounter some bears, and kill some of them directly; but mostly we drive them away and cause most of their deaths indirectly simply by destroying their habitat, cutting off their food sources, etc.

Two more things I remember from Fridman’s conversation with Tegmark.

First, Tegmark referred to GPT-4 and the most advanced AI systems we have today as “Baby AI”. I think this is a helpful metaphor, because it makes it easy for people to understand why it’s rational to fear where AI might lead even if the capabilities of GPT-4 et al. don’t seem too scary at the moment. That is, we all understand that a baby lion or a baby crocodile might not be particularly dangerous in its current form; but we also understand that these creatures grow into something far more dangerous, so it’s unwise to raise lions or crocodiles without giving any thought to how we will contain them as they grow bigger.

The second point I remember him making that really stuck with me was an analogy he made about flying machines.

People spent a lot of time wondering about “How do birds fly?” And that turned out to be really hard. […] It turned out there was a much easier way to fly. Evolution picked a more complicated one because it had its hands tied: it could only build a machine that could assemble itself. It could only build a machine using the most common atoms in the periodic table. It had to be able to repair itself and it also had to be incredibly fuel-efficient. For humans… put a little more fuel in it, there you go.

Tegmark’s point was that when we look at the sophistication of the human brain and assume that making an intelligent machine must be incredibly hard because brains are so complicated, we could be making the same mistake early inventors made when trying to build flying machines. Maybe you don’t need to replicate the machinery of a human brain, because you don’t face the same constraints that evolution had to work within to produce it. It may well turn out we can do it with just lots and lots of metal.

Wow, I almost forgot all about Moloch. This is probably the biggest theme of the entire conversation: Tegmark repeatedly referred to “Moloch” as the true enemy we face today, an abstraction representing the force that causes humans to behave in ways that are collectively destructive even when no individual human actually wants to. He said that most of the CEOs at the tech companies developing AI fully understand the risks and would prefer to pause or slow down development to give alignment research time to catch up… but because we are notoriously terrible at large-scale coordination, every CEO feels immense pressure to race ahead or else be overtaken by someone else.

He said it’s as if we’re all racing at high speed toward a cliff: even those in front know we’ll all die if we go over, but it just so happens there are riches and rewards right at the edge, and everyone wants to get there first so desperately that they’re willing to accelerate to a speed from which it will be impossible to stop in time.

Moloch is tricking us into doing this. This is how Tegmark explained his rationale for signing the open letter: it’s about giving AI labs the external pressure they need to slow down and put more focus on safety. It’s meant to give them a weapon against Moloch.