Yudkowsky + Wolfram on Law

Eliezer Yudkowsky and Stephen Wolfram were debating AI safety when they went down a rabbit hole of law, governance systems, and decidability…

Stephen Wolfram: [2:20:48] Right? I mean, so the fact that there are unexpected things that happen. Both, by the way… You know, one global thing I might say: I think there is an important societal choice, which I think maybe is what you’re getting at at some level, which is between, “Do we want computationally irreducible things in the world?” Or do we want to force everything to be computationally reducible? So what do I mean by that? I mean—
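
For anyone who doesn’t track Wolfram’s jargon: a process is computationally reducible if some shortcut predicts its outcome faster than running it step by step, and irreducible if no such shortcut exists. Below is a minimal sketch of the distinction (my own illustration, not from the debate) using elementary cellular automata: Rule 90 has a known closed form, while Rule 30 is Wolfram’s standard example of conjectured irreducibility.

```python
from math import comb

def eca_step(cells, rule):
    """One step of an elementary cellular automaton (Wolfram rule encoding)."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, steps):
    """Evolve from a single on-cell; width is chosen so the edges never matter."""
    width = 2 * steps + 3
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = eca_step(cells, rule)
    return cells

def rule90_shortcut(steps):
    """Closed form for Rule 90 from a single seed: Pascal's triangle mod 2.

    This is the computationally REDUCIBLE case: row `steps` is computed
    directly, without simulating any of the intermediate rows.
    """
    width = 2 * steps + 3
    center = width // 2
    row = [0] * width
    for x in range(-steps, steps + 1):
        if (steps + x) % 2 == 0:
            row[center + x] = comb(steps, (steps + x) // 2) % 2
    return row

t = 64
# Reducible: the shortcut agrees with the step-by-step simulation.
assert run(90, t) == rule90_shortcut(t)

# (Believed) irreducible: for Rule 30 no such shortcut is known, so to
# learn row t you apparently have to pay for all t steps of simulation.
print("".join("#" if c else "." for c in run(30, t)))
```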

Eliezer Yudkowsky: [2:21:14] Making a government that’s computationally reducible would be a start, but maybe I shouldn’t have… I’ve diverted to politics that way.

Stephen Wolfram: [2:21:20] Well, the governments are like machines. So that, you know, they’re like, they’re like our computers: you give them certain rules and then they operate according to those rules.

Eliezer Yudkowsky: [2:21:27] But if they had fewer rules and the rules were more understandable, it would probably be a more legible society. It’s not the dystopia I’m worried about, but you could sure tell a story about a dystopia where you’ve got, like, large language models executing all of the rules. And, you know, no human… There’s, like, they can actually apply all the rules. And no human even knows what all the rules are. Well, already nobody can read all the rules, but now they’re actually being applied.

[…]

Stephen Wolfram: [2:26:47] I suspect that it is impossible for law to be computationally reducible. In the same way, to be a bit more technical, that if you are, you know, doing math and you’re saying, “I’ve got these axioms. I want to have the integers and nothing but the integers.” Right? We know that there’s no finite set of axioms that gives you the integers and nothing but the integers.

Eliezer Yudkowsky: [2:27:07] I mean, if we’re, if we’re admitting second-order logic is meaningful at all, but yes.

Stephen Wolfram: [2:27:12] Well, the… Right. But we’re, we’re saying that, that… Without hyper-computation, you can’t kind of swoop in. You know, if, if we’re just using sort of standard, you know, we’re just saying we’ve got these axioms, x plus y equals y plus x, et cetera, et cetera, et cetera. Let us sculpt the world with those axioms so that they allow only the integers and nothing but the integers. I claim that’s very similar to saying, “Let’s have a system of laws that allows only these things to happen and not others.”
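
Wolfram’s integer analogy here is the standard nonstandard-models fact from logic, and Yudkowsky’s caveat is that second-order arithmetic does pin down the integers uniquely, just in a logic with no complete proof calculus. A sketch of the first-order half (my paraphrase of the textbook compactness argument, not something spelled out in the debate):

```latex
\textbf{Sketch (compactness).} Let $T$ be any first-order theory true of
$(\mathbb{N}, 0, S, {+}, {\times})$. Add a fresh constant $c$ together with
the infinite axiom scheme
\[
  c > \underbrace{S S \cdots S}_{n\ \text{times}}\, 0
  \qquad \text{for each } n \in \mathbb{N}.
\]
Every finite subset of these axioms is satisfied by $\mathbb{N}$ itself,
interpreting $c$ as a large enough number, so by compactness the whole set
has a model $M$. In $M$, the element $c$ exceeds every standard numeral: a
nonstandard ``integer'' that the axioms of $T$ cannot exclude. Categoricity
requires the full second-order induction axiom, and second-order logic has
no complete proof calculus.
```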

Eliezer Yudkowsky: [2:27:42] I mean, that’s not the purpose of the law. The purpose of the law is to interact and is, is to do predictable things when I interact with it. Like the doctrine of… I’m not going to process this correctly. [hesitation] Stare decisis in courts, where they try to repeat the previous court’s decision. It’s not that they think the, the previous court is as wise as possible. They’re trying to be predictable to people who need to navigate the legal system. That’s the foundational idea behind courts respecting past [decisions]. It’s not that the past court is optimal. It’s that if the past court didn’t really screw up, we’d like to just repeat its decision forever, so that the system is more navigable to the people inside it. And my, so my transhumanist politics says that, you know, like maybe among the reasons why you don’t want superintelligent laws is that the job of the laws is not to optimize your life as hard as possible, but to provide a predictable environment in which you can unpredictably optimize your own life and interact with other unpredictable people while predictably not getting killed.

Stephen Wolfram: [2:28:38] Right. I think, I think the point is, you know, what you’re saying is: for us to lead our lives, the way we lead our lives, we need some amount of predictability.
