Computational irreducibility is like prime numbers in a sense, right? So as long as it has pockets of reducibility, it is not the fundamentally irreducible thing? It's not the universe that's computationally irreducible, it's . . .
It's the processes that go on. OK, so what is the universe? Is the universe the underlying code from which you can generate the universe? Or is it these dynamic processes that are going on inside the universe today? Or is it just one slice of those dynamic processes, the universe as it is today, whatever that means?

What computational irreducibility talks about is this: if you want to predict some aspect of what the universe is going to do, you have to go from that underlying rule. You actually have to run it and see what the universe does. So, for instance, you might ask, Is warp drive possible? And you might say, well, gosh, if you have the underlying theory of the universe, you should be able to answer whether warp drive is possible, but probably it isn't easy to answer. Probably that will be one of these questions which is effectively undecidable, because what you'll be reduced to, from a mathematical point of view, is asking, Does there exist some configuration of material which has this property and that property and that property, given these underlying rules for how things can be set up? And that can be an arbitrarily difficult question to answer.

That's an example of what it means for there to be computational irreducibility. What computational irreducibility tells you is that in order to find the outcome of some process, you have to follow through some number of steps, and you can't always arbitrarily reduce the amount of computational effort that's needed.
There are no shortcuts.
Right. One feature of that is if you ask a question like, Can such-and-such a thing ever happen, even after arbitrarily long times?, that's a question that, if there is computational irreducibility, you may not be able to answer in a finite way. If there were computational reducibility, then the fact that one's asking about arbitrarily long times shouldn't scare one, because even a thing that takes an arbitrarily long time one can reduce down to something that only takes some given, finite time. But if there's computational irreducibility, then you can't expect to always do that reduction. If you're asking a question about what happens after arbitrarily long times, it can actually take you arbitrarily long to answer, and that's the origin of the phenomenon of undecidability that shows up in mathematics and Gödel's theorem and so on. It's something which, when applied to physics, leads to the consequence that even if you know the underlying theory, you might not be able to work out what is technologically possible in that universe.
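[A concrete way to see irreducibility, not from the conversation itself, is to simulate a simple program whose behavior has no known shortcut. The Python sketch below runs Wolfram's Rule 30 cellular automaton from a single black cell; no closed-form formula is known for the center column of cells, so the only way to learn step n is to actually run all n steps:

```python
def rule30_step(cells):
    """One step of Rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def center_column(steps, width=101):
    """Evolve a single black cell and collect the center cell at each step.
    There is no known shortcut formula for this column; you have to
    run every step to get it."""
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column

print(center_column(16))  # 16 steps of seemingly random output
```

The width of 101 cells is just a buffer wide enough that the wrap-around boundary never reaches the center within 16 steps; the irreducibility claim is about the pattern, not the boundary choice.]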
I think that most people would assume that if you know the underlying theory, you know all of the rules that govern the universe. And what you're saying is that that is not necessarily true, and to actually know what the rules are, you have to run the universe.
And that's basically the same fallacy as when people portray robots that act according to logic—in early science fiction, the fact that the robot had underlying rules meant that its behavior was in some way fundamentally simplistic. It's sort of the same fallacy to think that if you know the underlying rules and the underlying rules are simple, then, gosh, you must be able to tell what the behavior will be, because there can't be an irreducible distance between the underlying rules and the actual overall behavior.

In my efforts in basic science, one of the number-one observations was that if you look in a computational universe of possible programs, an awful lot have this property that even though the program is simple, the behavior is immensely complicated. When we do technology and when we create programs, most of the time we're trying to avoid the programs where the behavior is arbitrarily complicated. Those are the programs that work in some way we can't possibly understand, and that are full of bugs. We tend to aim in our current technology for things where the behavior is simple enough that we can readily see what its consequences will be.

It turns out, I think, that one of the big things that will happen in technology—and we can already see it happening—is that in the coming decades more and more technology will be found by searching the computational universe of possible programs, possible algorithms, possible structures, whatever. We will be able to know that a program performs some function for us, but if we look at how it does it, it will look very, very complicated and will not be something that we can readily predict. For example, when we build programs now in Mathematica and Wolfram Alpha, lots of those algorithms are found by algorithm discovery, where basically we're searching a billion different possible algorithms of a particular kind and finding the most efficient one that achieves some particular objective.
When you look at that algorithm, you say, What is this doing? Sometimes we can understand it and we say, gosh, that's really quite clever of it. And sometimes you say, gosh, I can't be bothered to figure this out; this is way too complicated to figure out what it's actually doing, but yet we can see that it's doing the thing that is useful to us.
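[As a toy illustration of that kind of algorithm discovery—a Python sketch, nothing like the actual Mathematica machinery, with primitive operations and names invented here for the example—one can enumerate every short program built from a few primitives and keep the first one whose behavior matches a specification on all inputs:

```python
import itertools

# Invented 4-bit primitive operations for this sketch.
OPS = {
    "inc": lambda x: (x + 1) & 0xF,                 # add 1 (mod 16)
    "dbl": lambda x: (x << 1) & 0xF,                # double (mod 16)
    "not": lambda x: (~x) & 0xF,                    # bitwise complement
    "rot": lambda x: ((x << 1) | (x >> 3)) & 0xF,   # rotate bits left
}

def run(program, x):
    """Apply a sequence of primitive ops to x, left to right."""
    for name in program:
        x = OPS[name](x)
    return x

def search(target, max_len=4):
    """Brute-force every op sequence up to max_len and return the first
    one that agrees with `target` on all 16 possible inputs."""
    for length in range(1, max_len + 1):
        for program in itertools.product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in range(16)):
                return program
    return None

# Specification: two's-complement negation mod 16. We state only what
# the answer must do, not how; the search finds the how.
print(search(lambda x: (-x) & 0xF))
```

Here the winner happens to be short enough to read off ("complement, then increment" is a textbook negation trick, so this is a case where one can say, "that's really quite clever of it"). With richer primitive sets the programs found this way are typically opaque, which is exactly the situation being described above.]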
You can't figure out how it's doing it, but you know what it's doing.
You can see it's twiddling these bits in this way and, gosh, they always line up in this way at the end. And one can automatically prove that some particular property will always be the case, but as a human trying to write the code, one would never have arrived at this kind of thing. It works in a way that is just utterly alien to a human who's used to creating code that does its thing iteratively, in a very organized way. When you look at it, it's like, wow, it's working, but it's working in this very complicated way. Now, we see examples of this quite often in nature—in biology, in physics, in other places. Natural selection, actually, is a funny one, because evolution is closer to technology than one might think: evolution has a hard time working on things that are really complicated. It's much better at, Well, let's just extend this bone a bit and see what happens. It's actually quite rare for evolution to go out and do something that is truly innovative. It's usually doing things incrementally, in a way that's similar to a lot of engineering that we do.
Sometimes you just have to try it, and then you will know.
How many tries will it take for a robot to do a kickflip?
Wolfram Alpha says:
Let's see it happen!