
January 25, 2009

Comments

It occurred to me at some point that Fun Theory isn't just the correct reply to Theodicy; it's also a critical component of any religious theodicy program. And one of the few ways I could conceive of someone providing major evidence of God's existence.

That is, I'm fairly confident that there is no god. But if I worked out a fairly complete version of Fun Theory, and it turned out that this really was the best of all possible worlds, I might have to change my mind.

Unfortunately, it seems to me that moral anti-realism and axiological anti-realism place limits on our ability to "optimize" the universe.

To put the argument in simple terms:

1. Axiological/moral anti-realism states that there are no categorically good states of the universe. On this we agree. The goodness of states of the universe is contingent upon the desires and values of those who ask the question; in this case, us.

2. Human minds can only store a finite amount of information in our preferences. Humans who have spent more time developing their character beyond the evolutionarily programmed desires [food, sex, friendship, etc.] will fare slightly better than those who haven't, i.e. their preferences will be more complicated. But probably not by very much, information-theoretically speaking. The amount of information your preferences can absorb by reading books, by having life experiences, etc. is probably small compared to the information implicit in just being human.

3. The size of the mutually agreed-upon preferences of any group of humans will typically be smaller than the preferences of any one human. Hence it is not surprising that in the recent article on "Failed Utopia 4-2" there was a lot of disagreement regarding the goodness of that world.

4. The world that we currently live in here in the US/UK/EU fails to fulfill a lot of the base preferences that are common to all humans, notable examples being dissatisfaction with the opposite sex, boring jobs, depression, and aging.

5. If one optimized over these unfulfilled preferences, one would get something that resembled, for most people, a low-grade utopia that looked approximately like Banks' Culture. This low-grade utopia would probably be only a small amount of information away from the world we see today. Not that it isn't worth doing, of course!


This explains a lot of things. For example, the WTA's change of name from "transhumanist" to "humanity plus". "Humanity plus" is code for "low-grade utopia for all". "Transhumanist" is code for futures that various oddball individuals envisage, in which they (somehow) optimize themselves way beyond the usual human preference set. These two futures are eminently compatible: we can have them both, but most people show no interest in the second set of possibilities. It will be interesting to think about the continuum between these two goals.

It's also interesting to wonder whether the goals of "radical" transhumanists might be a little self-contradictory. With a limited human brain, you can (as a matter of physical fact) only entertain thoughts that constrain the future to a limited degree. Even with all technological obstacles out of the way, our imaginations might place a hard limit on how good a future we can try to build for ourselves. Anyone who tries to exceed this limit will end up (somehow) absorbing noise from their environment and incorporating it into their preferences. Not that I have anything against this: it is how we got our preferences in the first place. But it is not a strong motivator for me to fantasize about spending eternity fulfilling preferences that I don't have yet, and which I will generate at random at some point in the future when I realize that my extant preferences have "run out of juice".

This, I fear, is a serious torpedo in the side of the transhumanist ideal. I eagerly await somebody proving me wrong here...

Roko, preferences are not flat; they depend and act on the state of the world in general and on themselves in particular. They can grow very detailed, and can include a state quite remote from the current world as desirable. The problem with the derailed aspects of transhumanism is not remoteness from the currently human, but mistaken preferences arrived at mostly by blind leaps of imagination. We define the preferences over the remote future implicitly, without being able to imagine it, only gradually becoming able to actually implement them, preserving or refining the preference through growth.

In response to my own question: I think that the information difference between the innate biological preferences that we have and explicitly stated preferences is a lot bigger than I thought.

For example, I can state the following:

(1) I wish to be smart enough to understand all human science and mathematics published to this date, and to solve all outstanding scientific and philosophical questions, including intelligence, free will and ethics. I want to know the contents and meaning of every major literary work in print and every major film, to understand the history of every major civilization, and to fall in love with the person who is most compatible with me in the world.

Now if I make all these wishes, how much have I cut down future states of the universe? How much optimizing power in bits have I wished for?

I expressed the wish in about 330 characters which, at Shannon's estimate of roughly one bit of entropy per character of English, comes to about 330 bits of information: roughly the information needed to specify the state of an 18x18 grid of pixels, each of which can be either on or off. I feel that this is something of an underestimate of how much I have cut down future states of the universe. Another way of calculating the complexity of the above wish is to bound it by the log of the number of psychologically distinguishable states of my mind. Given the FHI brain emulation roadmap, this upper bound could be a very large number indeed. Here is another wish of a few hundred characters:

(2) I want to be as rich as Bill Gates. I want to have ten mansions, each with ten swimming pools and a hundred young, willing female virgins to cater to my every whim. I want my own private army and an opposing force who I will trounce in real combat every weekend. I want an 18-inch penis and muscles the size of Arnie in his younger days, and I want to be 6'7''. I want to be able to eat galaxy chocolate flavored ice cream all day without getting fat or getting bored with it. I want a car that goes at 5000 miles an hour without any handling problems or danger of accident, and I want to be able to drive it around the streets of my city and leave everyone in the dust.

Now it appears to me that this wish probably cut down the future by only about 300 bits; it is a far less complex wish than the first one I gave. Presumably the difference between those who end up in low-grade heaven and those who end up as superintelligent posthumans inhabiting a Dyson sphere, or having completely escaped from our physics, lies in the difference between the former wish and the latter. Again, it is fruitful and, IMO, very important to explore the continuum between the two.
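As a rough back-of-the-envelope sketch of the estimate above (the one-bit-per-character figure is Shannon's classic estimate for English; the wish text and the grid comparison are purely illustrative):

import math

# Crude "optimization power" estimate for a wish, assuming roughly one bit of
# entropy per character of English text. Illustrative only.
wish = (
    "I wish to be smart enough to understand all human science and mathematics "
    "published to this date, and to solve all outstanding scientific and "
    "philosophical questions..."  # abbreviated stand-in for wish (1)
)

BITS_PER_CHAR = 1.0              # assumed entropy rate for English text
bits = len(wish) * BITS_PER_CHAR

side = math.isqrt(round(bits))   # side of a binary pixel grid holding ~that many bits
print(f"~{bits:.0f} bits, about the information in a {side}x{side} black-and-white grid")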

Roko, the Minimum Message Length of that wish would be MUCH greater if you weren't using information already built into English and our concepts.

I can certainly understand your dissatisfaction with medieval depictions of heaven. However, your description of Fun Theory reminds me of the Garden of Eden. That is, in Genesis 1-2, God basically says:

"I've created the two of you, perfectly suited for one another physically and emotionally, although the differences will be a world to explore in itself. You're immortal and I've placed you in a beautiful garden, but now I'm going to tell you to go out and be fruitful and multiply and fill the earth and subdue it and have dominion over all living things; meaning build, create, procreate, invent, explore, and enjoy what I've created, which by the way is really really big and awesome. I'll always be here beside you, and you'll learn to live in perfect communion with me, for I have made you in my own image to love the process of creation as I do. But if you ever decide that you don't want that, and that you want to go it alone, rejecting my presence and very existence, then there's this fruit you can take and eat. But don't do it, because if you do, you will surely die."

It seems that the point of disagreement is that your utopia doesn't have an apple. The basic argument of theodicy is that Eden with the apple is better than Eden sans apple. To the extent that free will is good, a utopia must have an escape option.

Or, to put it another way, obedience to the good is a virtue. Obedience to the good without the physical possibility of evil is a farce.

It's easy to look around and say, "How could a good God create THIS?" But the real question is, "How could a good God create a world in which there is a non-zero probability of THIS?"

Apparently having 72 virgins at your disposal is a utopia for many. EY should look into this...

But an Eden with a reversible escape option is surely better than an Eden with a non-reversible escape option, yes?

@ Carl Shulman

Yes, I am aware that human "concepts" are acting as a big multiplier on how much you can wish for in a small number of words. But I want to know whether certain wishes make better or worse use of this, and I want to get some idea of exactly how much more a human can feasibly wish for.

I think that by using established human concepts to make a wish ("I want to understand and solve all current scientific problems"), you are able to constrain the future more, but you have less understanding of what you'll actually get. You trade in some safety and get more mileage.

@ Nesov: "Roko, preferences are not flat..."

I don't quite understand what you're saying. Perhaps it would help if I attempt to make my own post a bit clearer.

@Roko:
As I understood it, one of the points you made was about how the preferences of both individual people and humanity as a whole are quite coarse-grained, and so strong optimization of the environment is pointless. Beyond a certain precision, the choices become arbitrary, and so continuing systematic optimization, forcing choices to be non-arbitrary from the updated perspective, basically consists in incorporating noise into preferences.

I reply that a formula for pi can be written down in far fewer bytes than it would take to calculate the 10,000th digit of its decimal expansion. A human embodying a description of morality, just like a note containing the formula for pi, can have no capacity for imagining (computing) some deeper property of that description, and still precisely determine that property. What we need in both cases is a way around the limitations of the medium presenting the description, without compromising its content.
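To make the analogy concrete, here is a sketch (Gibbons' standard unbounded spigot algorithm, chosen purely as an illustration): the program is a few hundred bytes, yet it precisely determines the 10,000th digit of pi, and every digit after it, even though none of those digits appear anywhere in the program text.

from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi one at a time (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is settled: emit it and rescale the state
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# The generator definition above is a few hundred bytes, yet it pins down every digit
# (computing 10,001 of them takes a little while in pure Python):
digits = list(islice(pi_digits(), 10001))
print(digits[:10])    # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
print(digits[10000])  # the 10,000th digit after the leading 3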

@Vladimir:

Yes, you understood my message correctly, and condensed it rather well.

Now, what would it mean for human axiology to be like pi? A simple formula that unfolds into an "infinitely complex looking" pattern? Hmmm. This is an interesting intuition.

If we treat our current values as a program that will get run to infinity in the future, we may find that almost all of the future output of that program is determined by things that we don't really think of as being significant; for example, very small differences in the hormone levels in our brains when we first ask our wish-granting machine for wishes.

I would only count those features of the future that are robust to very small perturbations in our psychological state as truly the result of our prefs. On the other hand, features of the future that would be the same no matter what our minds were like are also not the result of our prefs.

And still there is the question of what exactly this continued optimization would consist of. The 100th digit of pi makes almost no difference to its value as a number. Perhaps the hundredth day after the singularity will make almost no difference to what our lives are like, by some suitable metric. Maybe it really will look like calculating the digits of pi: pointless after about digit number 10.

Satisfying both the robustness criterion and this non-convergence criterion seems hard.

If a computer program computes pi to 1,000,000 instead of 100 places, it doesn't make the result more dependent on thermal noise. You can run arbitrarily detailed abstract computations, without having the outcome depend on irrelevant noise. When you read a formula for pi from a note, differences in writing style don't change the result. AI should be only more robust.

Think of digits playing out in time, so that it's important to get each of them right at the right moment. Each later digit could be as important in the future as earlier digits now.

@Vladimir:

It is an open question whether our values and our lives will behave more like you have described or not.

For a lot of people, the desire to conform and not to be too weird by current human standards might make them converge over time. These people will live in highly customized utopias that suit their political and moral views: Christians in a mini-world where everyone's mind has been altered so that they can't doubt God or commit any sin; ordinary modern semi-hedonists who live in something like the Culture. (Like pi as a number we use for engineering: the digits after the 100th convey almost no new information.)

For others, boredom and curiosity will push them out into new territory. But the nature of both of these emotions is to incorporate environmental noise into one's prefs. They'll explore new forms of existence, new bodies, new emotions, etc., which will make them recursively weirder. These people will behave like the coordinates of a chaotic system in phase space: they will display very high sensitivity to initial conditions and to chance events ("oh look, a flock of birds. I wonder what it would be like to exist as a swarm intelligence. I know, I'll try it").
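(To make the chaotic-system analogy concrete, here is a toy sketch using the logistic map, a textbook chaotic system; the map and the numbers are illustrative only, not a model of anyone's preferences.)

# Toy illustration of sensitivity to initial conditions, using the logistic map
# x -> r*x*(1-x) at r = 4, a textbook chaotic system. Two starting points that
# differ by one part in ten billion disagree completely within about 40 steps.

def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)
for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: |a - b| = {abs(a[step] - b[step]):.3e}")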

The only group of people who I can see behaving the way you want are scientists. We have an abstract desire to understand how the world works. We will alter ourselves to become more intelligent in order to do so, and we have no idea what we will discover along the way. We are surely in for surprises as big as the discovery of evolution and quantum mechanics. Each new level of intelligence and discovery will be truly new, but hopefully the nature of truth is an abstract universal invariant that doesn't depend upon the details of the path you take to get to it.

In essence, scientists are the only ones for whom long-term optimization of our world has the kind of unbounded value that singularitarians want. Ordinary people will only get a limited amount of value out of a positive singularity. Thus their apathy about it is understandable.

"Thus their apathy about it is understandable."

... given that they don't think it is very likely, and they discount the future.

Note that I used "scientist" in a very general sense: anyone who really wants to understand reality for the sake of understanding it, anyone who has that natural curiosity.

I want to be able to come back to this easily. Would you create a category of "Sequences" and post it to that, so that there is a link in the sidebar? I think there is at least one other such sequence.

