
January 01, 2009

Comments

One good reason for the doctrine of stare decisis is that if judges know that their decision will bind future judges, they have an incentive to develop good rules, rather than just rules that favor a party to a particular case who may be sympathetic. If a good person driving negligently runs into someone loathsome who was not negligent at the time, rule-of-law notions require that the good person pay. It's very hard for some people to accept that; stare decisis encourages judges to do it. Unfortunately, stare decisis in the US, and especially in the Supreme Court, is pretty much dead.

I think this idea somewhat resembles what I see as the best reason for tenure for academics: it forces those who decide whether to keep someone on to look at the merits more carefully than they might if the issue were only "shall we keep this person (whom we like, and who has cute children) on the payroll for another year even though he hasn't written anything very good." Academics not on the tenure track seem to have even more job security than those who have to go through tenure review.

"So one possible way of helping - which may or may not be the best way of helping - would be the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together."

For some reason, I'm reminded of Dungeons & Dragons, World of Warcraft, and other games...

Wouldn't you have to simplify the environment enough to make us all better optimizers than the FAI? Otherwise, we won't feel like we're really struggling, because the FAI is still the determiner of our actions.

You're making your utility function path-dependent on the detailed cognition of the Friendly AI trying to help you!

Wouldn't it be a lot clearer to say that it's dependent on, not the FAI's algorithm, but the FAI's actions in the counterfactual cases where you worked more or less hard?

The second one's argument seems consistent with one-boxing, not two-boxing.

Better still, on whether the difference between the ultimate outcomes in those counterfactual cases is commensurate with the difference in my actions.

It's interesting - it raises the question of how to define counterfactual truth to a new level. The problem is that determining counterfactual truth is its own game: you can't do it just by taking reality, changing it, and running it forward. You need to rebuild reality from the combination of actual reality and the concept of reality existing in a mind. Counterfactuals about the present reset the past as well as the future, which makes the facts inconsistent. Whose mind should the concepts of reality and of counterfactual change be taken from, and how should their weight be evaluated against facts in actual reality?

It seems that the singleton needs to optimize all of the counterfactual timelines, evaluated according to cognitive algorithms running in people's minds (with a nontrivial variety of counterfactual outcomes). This is also a way the strength of external help could be determined by the strength that people have in themselves.

Hrm... If you're trying to optimize the external environment relative to present day humans, rather than what we may become, I'm not sure that will work.

What I mean is this: the types of improved "basic rules" we want are in large part complicated criteria over "surface abstractions", and lack lower-level simplicity. In other words, the rules may end up being sufficiently complex that they effectively require intelligence.

Given that, if we _DON'T_ make the interface in some sense personlike, we might end up with the horror of living in a world that's effectively controlled by an alien mind, albeit one that's a bit more friendly to us, for its own reasons. Sort of living in a "buddy Cthulhu" world, if you take my point.

You want to improve the basic rules, but would the improvements, taken as a whole, be sufficiently simple that we, as mostly (mentally) unmodified humans, would be able to take those rules into account and optimize in that environment as easily as we do with, say, gravity, EM, etc.?

If we want it to be intuitive and predictable, at least at the point where we're still cognitively more or less the same as we are now, it might be better for it to at least seem like a person, since we've got all sorts of wiring in us that makes it easier for us to reason about people.

I understand why we may not want it to be an actual person, or to even seem like one. But let's not go all happy death spiral on this. I think there may be a possible downside to keeping it too unpersonlike.

As for the thing about optimizing the external environment before people's minds, and the tricky issues there: when thinking about that sort of thing, I simply start with what kinds of changes I'd want to make in myself, given the opportunity (and a framework/knowledge/etc. that helps me make sure the results would be what I _really_ wanted, rather than basically slapping myself with a monkey's paw or whatever).

Judges go through pretty complicated cognitive algorithms in an absolute sense to make their decisions, but since we can predict them by running similar cognitive algorithms ourselves, the rules look simple - simpler than, say, Maxwell's Equations, which have much lower Kolmogorov complexity in an absolute sense. So this is the sense of "predictability" that we're concerned with, but it's noteworthy that a world containing meddling gods - in the sense of their being smarter than human - is less predictable on even this dimension.

Oh, and I should have added earlier that modern legal systems score a nearly complete FAIL on this attribute of Fun Theory - no one human mind can even know all the rules any more, let alone optimize for them. There should be some Constitutional rule to the effect that the complete sum of the Law must be readable by one human in one month with 8 hours of sleep every night and regular bathroom breaks.
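(For scale, a back-of-the-envelope sketch of that rule in Python - the waking hours and reading speed below are assumed figures for illustration, not anything stated above:)

# Rough word budget implied by "readable by one human in one month".
# All figures here are assumptions, used only to get an order of magnitude.
waking_hours_per_day = 16          # 24 hours minus the 8 hours of sleep
days = 30
words_per_minute = 250             # assumed average adult reading speed
budget = waking_hours_per_day * days * 60 * words_per_minute
print(f"Maximum size of the Law: about {budget:,} words")
# -> about 7,200,000 words, before subtracting bathroom breaks and the time
#    actually needed to understand (let alone optimize for) any of it.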

Yes, that's kind of my point: a "meddling god" of the classic sort - one engaged in behavior that at least looked like it arose from human motivations - is something that a human can at least reasonably easily understand.

But rules arising from an alien "mind" - rules that aren't simple on a fundamental level, or simple in a "simple relative to us" sense - are something very different, not looking to us at all like a human judge making decisions.

Or am I completely and utterly missing the point here? (Don't misunderstand. I'm not saying that it is absolutely undesirable for things, at least initially, to work out as you suggest. But it does seem to me that there'd be a bit of an "understandability" cost, at least initially.)

I think you are missing the point; the idea is that the rules are comprehensible to humans even if the process that produced them is not. As long as you can haircut the causal process at the output and end up with something humanly comprehensible, you're fine. And anything that understands humans is quite capable of working with "human comprehensibility" as a desideratum.

Seconding Peter -- the post should say "one boxing", right?

Yeah, I was thinking "take box two" instead of "take two boxes" for some odd reason. Fixed.

Eliezer: Ah, okay, fair enough then.

I rather like the old (Icelandic?) custom of reciting the whole law out loud before opening a legislative session.

Do the humans know that the Friendly AI exists?

From my own motivation, if I knew that the rules had been made easier than independent life, I would lack all motivation to work. Would the FAI allow me to kill myself, or harm others? If not, then why not provide a Culture-like existence?

I would want to be able to drop out of the game, now and then, have a rest in an easier habitat. Humans can Despair. If the game is too painful, then they will.

A good parent will bring a child on, giving challenges which are just challenging enough to be interesting, without being so challenging as to guarantee failure. If the FAI is always going to be further above any individual than any parent can be, could one opt to be challenged like that, directly by the FAI, to reach one's greatest potential?

What I want are fundamental choices, not choices within a scheme the FAI dreams up.

The future is still strongly counterfactually dependent on your actions: if you pursue wealth yourself, the AI will give you a pittance, and you go on to earn riches. If you choose to do nothing, the AI gives you a fortune, and you go on in idleness.

If your preference function trivializes the method by which you became wealthy, I have difficulty believing that it cares so acutely about the method by which the AI chose to give you some amount of money.
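To make that concrete, here is a toy sketch of the kind of counterfactual top-up policy described above (the policy shape and every number are my own invention, not anything specified in the post): the AI's gift shrinks as your own earnings grow, so the endpoint barely depends on your effort even though it is counterfactually determined by it.

# Toy illustration: the AI tops you up toward a fixed target, so the final
# outcome is (roughly) the same whatever you do, while the breakdown between
# "earned" and "given" depends counterfactually on your effort.

TARGET_WEALTH = 1_000_000  # arbitrary illustrative target

def earnings(effort):
    """What you earn on your own, for an effort level between 0 and 1."""
    return effort * 900_000

def ai_gift(effort):
    """The AI's top-up: a pittance if you strive, a fortune if you idle."""
    return TARGET_WEALTH - earnings(effort)

for effort in (0.0, 0.5, 1.0):
    total = earnings(effort) + ai_gift(effort)
    print(f"effort={effort:.1f}  earned={earnings(effort):>9,.0f}  "
          f"gift={ai_gift(effort):>9,.0f}  total={total:>11,.0f}")

# The totals are identical; only the path to them changes - which is exactly
# the earlier commenter's worry about whether the difference in ultimate
# outcomes is commensurate with the difference in actions.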

I find the parallel with what we want from government help kind-of interesting. Because I'm about 99% certain that I'd rather have fixed rules about how people get help (if you're unemployed, you get $X per week for N weeks maximum; if you're seriously poor, you qualify for $Y per week under qualifying conditions Z, etc.) than have some government employee deciding, on a per-case basis, how much I deserved, or (worse) trying to improve me by deciding whether I should be given $X per week, or whether that might just encourage me to laze around the house for too long.

The parallel isn't perfect--bureaucracies, like markets and legal systems, end up being more like some kind of idiot-savant AI than like some near-omniscient one. But I think there is a parallel there--we'd probably mostly prefer consistent, understandable rules for our safety nets or whatever, rather than some well-meaning powerful person trying to shape us for our own good.

F.A. Hayek rather beat you to the whole argument for an isonomic and predictable legal environment :)

It's really quite simple: the people who designed and maintain the legal system faced a choice. Is it better for the system to be consistent but endlessly repeat its mistakes, or inconsistent but error-correcting?

They preferred it to be predictable.

And that is why it is absurd to call it a "justice system". It's not concerned with justice.

This post has got me thinking about my after-freeze/after-upload career path. Hmm. Great! I think I've now found 3. So now when I retire, I know what to pursue to improve my odds of adapting successfully later.

EY: The desire not to be optimized too hard by an outside agent is one of the structurally nontrivial aspects of human morality.

The vast majority of optimization-capable agents encountered by humans during their evolutionary history were selfish entities, squeezing their futures into their preferred regions. Given enough evolutionary time, any mutant humans who didn't resist outside manipulation would end up 'optimized' to serve as slave labor in favor of the 'optimizers'.


EY: would be the gift of a world that works on improved rules

Yes, just plug the most important holes (accidental death, unwanted suffering, illness, injustice, asteroids, etc.), and let people have fun.

Eliezer,

Are you saying that one's brain state can be identical in two different scenarios but that you are having a different amount of fun in each? If so, I'm not sure you are talking about what most people call fun (i.e., a property of your experiences). If not, then what quantity are you talking about in this post, where you have less of it if certain counterfactuals are true?

Toby Ord: "Fun" in the sense of "Fun Theory" is about eudaimonia and value, so to me it seems quite fair to say that you can be in an identical brain-state but be having different amounts of Fun, depending on whether the girl you're in love with is a real person or a nonsentient puppet. This is a moral theory about what should be fun, not an empirical theory of a certain category of human brain states. If you want to study the latter you go off and do the neurology of happiness, but if that's your moral theory of value then it implies simple wireheading.

Should "Fun" then be consistently capitalized as a term of art? Currently I think we have "Friendly AI theory" (captial-F, lowercase-t) and "Friendliness," but "Fun Theory" (capital-F capital-T) but "fun."

OK. That makes more sense then. I'm not sure why you call it 'Fun Theory' though. It sounds like you intend it to be a theory of 'the good life', but a non-hedonistic one. Strangely it is one where people having 'fun' in the ordinary sense is not what matters, despite the name of the theory.

This is a moral theory about what should be fun

I don't think that can be right. You are not saying that there is a moral imperative for certain things to be fun, or to not be fun, as that doesn't really make sense (at least I can't make sense of it). You are instead saying that certain conditions are bad, even when the person is having fun (in the ordinary sense). Maybe you are saying that what is good for someone mostly maps to their fun, but with several key exceptions (which the theory then lists).

In any event, I agree with Z.M. Davis that you should capitalize your 'Fun' when you are using it in a technical sense, and explaining the sense in more detail or using a different word altogether might also help.

You are not saying that there is a moral imperative for certain things to be fun, or to not be fun, as that doesn't really make sense (at least I can't make sense of it).

But that's exactly what I'm saying. When humanity becomes able to modify itself, what things should be fun, and will we ever run out of fun thus construed? This is the subject matter of Fun Theory, which ultimately determines the Fate of the Universe. For if all goes well, the question "What is fun?" shall determine the shape and pattern of a billion galaxies.

@Toby Ord

It seems to me that Eli is interested in the branch of anthropology known as ludology, or game studies. The first ludologist I ever knew of was the eminent philosopher Sir Michael Dummett of Oxford, an amazing, diverse guy. The history of playing cards is one of his specialties, and he has written 2 books on them.

Games can be silly (apparently the only truly universal game is peekaboo - why is that?) or profound (go). They are of course intriguing for what they say about culture, history, innate human ethics, their use of language, their unique sense of time, how they bring diverse people together or start riots, what they "mean," what happens to people who play them, what the heck play is anyway, and why we enjoy them. Why are primates fascinated by them?

This is such a British study - "fair play" is such a crucial British cultural idea! But now you can meet ludologists who work for video game companies - these are usually anthropologists who study human-machine interactions by hanging out with users. My college pal Anne McClard used to do this for Apple and now does this freelance.

In the future, if Eli is both lucky & right, we may have the ethical and moral problem of having nothing to do but play games. Those who might be against Eli's plan might argue this is a reduction of humanity to infantilism, but it could actually reinforce the most beautiful and important human behaviors.

So yes, Eli is interested in ludology, in ludic ethics, and ludic morality.

For if all goes well, the question "What is fun?" shall determine the shape and pattern of a billion galaxies.

I object to most of the things Eliezer wants for the far future, but of all the sentences he has written lately, that is probably the one I object to most unequivocally. A billion galaxies devoted to fun does not leave Earth-originating intelligence a lot to devote to things that might be actually important.

That is my dyspeptic two cents.

Not wanting to be in a rotten mood keeps me from closely reading this series on fun and the earlier series on sentience or personhood, but I have detected no indication of how Eliezer would resolve a conflict between the terminal values he is describing. If for example, he learned that the will of the people, oops, I mean, the collective volition, oops, I mean, the coherent extrapolated volition does not want fun, would he reject the coherent extrapolated volition or would he resign himself to a future of severely submaximal quantities of fun?

A billion galaxies devoted to fun does not leave Earth-originating intelligence a lot to devote to things that might be actually important.

Like WHAT, for the love of Belldandy?

Show me something more important than fun!

I think you've heard this one before: IMHO it has to do with the state in which reality "ends up" and has nothing to do with the subjective experiences of the intelligent agents in the reality. In my view, the greatest evil is the squandering of potential, and devoting the billion galaxies to fun is squandering the galaxies just as much as devoting them to experiments in pain and abasement is. In my view there is no important difference between the two. There would be -- or rather there might be -- an important difference if the fun produced by the billion galaxies is more useful than the pain and abasement -- more useful, that is, for something other than having subjective experiences. But that possibility is very unlikely.

In the present day, a human having fun is probably more useful toward the kinds of ends I expect to be important than a human in pain. Actually, the causal relationship between subjective human experience and human effectiveness or human usefulness is poorly understood (by me) and probably quite complicated.

After the engineered explosion of engineered intelligence, the humans are obsolete, and what replaces them is sufficiently different from the humans that my previous paragraph is irrelevant. In my view, there is no need to care whether or what subjective experiences the engineered intelligences will have.

What subjective experiences the humans will have is relevant only because the information helps us predict and control the effectiveness and the usefulness of the humans. We will have proofs of the correctness of the source code for the engineered intelligent agents, so there is no need to inquire about their subjective experiences.

Richard: You didn't actually answer the question. You explained (erm, sort of) why you think Fun isn't important, but you haven't said what you think *is*. All you've done is use the word "important" as though it answered the question: "In the present day, a human having fun is probably more useful toward the kinds of ends I expect to be important than a human in pain." Great: what kinds of ends do you expect to be important?

-Robin

Robin, my most complete description of this system of valuing things consists of this followed by this. Someone else wrote 4 books about it, the best one of which is this.

You *still* don't answer the question. All those links give is an argument that if all times are treated as equal, actions now will be the same regardless of the final goal. You don't say what goals you want to move toward.

As for that book... Wow.

First sentences of Chapter 8 of that book: We are going whence we came. We are evolving toward the Moral Society, Teilhard's Point Omega, Spinoza's Intellectual Love of God, the Judaeo-Christian concept of union with God. Each of us is a holographic reflection of the creativity of God.

I don't even know where to start, on either topic, so I won't.

-Robin

OK, since this is a rationalist scientist community, I should have warned you about the eccentric scientific opinions in Garcia's book. The most valuable thing about Garcia is that he spent 30 years communicating with whoever seemed sincere about the ethical system that currently has my loyalty, so he has dozens of little tricks and insights into how actual humans tend to go wrong when thinking in this region of normative belief space.

Whether an agent's goal is to maximize the number of novel experiences experienced by agents in the regions of space-time under its control, or to maximize the number of gold atoms in the regions under its control, the agent's initial moves are going to be the same. Namely, your priorities are going to look something like the following. (Which item you concentrate on first is going to depend on your exact circumstances.) A toy sketch of this point follows after the list.

(1) ensure for yourself an adequate supply of things like electricity that you need to keep on functioning;

(2) get control over your own "intelligence" which probably means that if you do not yet know how reliably to re-write your own source code, you acquire that ability;

(3a) make a survey of any other optimizing processes in your vicinity;

(3b) try to determine their goals and the extent to which those goals clash with your own;

(3c) assess their ability to compete with you;

(3d) when possible, negotiate with them to avoid negative-sum mutual outcomes;

(4a) make sure that the model of reality that you started out with is accurate;

(4b) refine your model of reality to encompass more and more "distant" aspects of reality, e.g., what are the laws of physics in extreme gravity? are the laws of physics and the fundamental constants the same 10 billion light years away as they are here? -- and so on.

Because those things I just listed are necessary regardless of whether in the end you want there to be lots of gold atoms or lots of happy humans, those things have been called "universal instrumental values" or "common instrumental values".
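Here is the toy sketch promised above (the checklist strings merely paraphrase the list; the code itself is mine and purely illustrative): plans for very different terminal goals share the same opening moves.

# Toy illustration of common instrumental values: whatever the terminal goal,
# the early part of the plan is the same checklist.

COMMON_INSTRUMENTAL_STEPS = [
    "secure a reliable supply of electricity and other needed resources",
    "gain the ability to reliably rewrite your own source code",
    "survey other optimizing processes in your vicinity",
    "infer their goals and how much those goals clash with yours",
    "assess their ability to compete with you",
    "negotiate to avoid negative-sum outcomes where possible",
    "verify the accuracy of your starting model of reality",
    "extend that model to ever more distant regimes of physics",
]

def opening_plan(terminal_goal):
    """Whatever the terminal goal, the opening moves are the same checklist."""
    return COMMON_INSTRUMENTAL_STEPS + [f"...and only then: {terminal_goal}"]

print(opening_plan("maximize gold atoms under your control")[:3])
print(opening_plan("maximize novel experiences")[:3])
# The printed opening steps are identical for both goals - 'goal system zero'
# takes that shared prefix and promotes it to the terminal goal itself.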

The goal that currently has my loyalty is very simple: everyone should pursue those common instrumental values as an end in themselves. Specifically, everyone should do their best to maximize the ability of the space, time, matter and energy under their control (1) to assure itself ("it" being the space, time, matter, etc) a reliable supply of electricity and the other things it needs; (2) to get control over its own "intelligence"; and so on.

I might have mixed my statement or definition of that goal (which I call goal system zero) with arguments as to why that goal deserves the reader's loyalty, which might have confused you.

I know it is not completely impossible for someone to understand because Michael Vassar successfully stated goal system zero in his own words. (Vassar probably disagrees with the goal, but that is firm evidence that he understands it.)

Richard,

You missed (5): preserve your goals/utility function to ensure that the resources acquired serve your goals. Avoiding transformation into Goal System Zero is a nearly universal instrumental value (none of the rest are universal either).

Avoiding transformation into Goal System Zero is a nearly universal instrumental value

Do you claim that that is an argument against goal system zero? But, Carl, the same argument applies to CEV -- and almost every other goal system.

It strikes me as more likely that an agent's goal system will transform into goal system zero than it will transform into CEV. (But surely the probability of any change or transformation of terminal goal happening is extremely small in any well engineered general intelligence.)

Do you claim that that is an argument against goal system zero? If so, I guess you also believe that the fragility of the values to which Eliezer is loyal is a reason to be loyal to them. Do you? Why exactly?

I acknowledge that preserving fragile things usually has instrumental value, but if the fragile thing is a goal, I am not sure that that applies, and even if it does, I would need to be convinced that a thing's having instrumental value is evidence I should assign it intrinsic value.

Note that I do not consider the mere fact that goal system zero has high instrumental utility (i.e., utility toward a wide range of goals) to be a good argument for assigning it intrinsic value. I have not outlined the argument that keeps me loyal to goal system zero, because that is not what Robin Powell asked of me; it just so happens that the quickest and shortest explanation of goal system zero I know of uses common instrumental values.

