
January 30, 2009

Comments

Richard, I'd take the black holes of course.

As I expected. Much of what you (Eliezer) have written entails it, but it still gives me a shock, because piling as much ordinary matter as possible into supermassive black holes is the most evil end I have been able to imagine. In contrast, suffering is merely subjective experience and consequently, according to my way of assigning value, unimportant.

Transforming ordinary matter into mass inside a black hole is a very potent means to create free energy, and I can imagine applying that free energy to ends that justify the means. But to put ordinary matter and radiation into black holes massive enough that the mass will never come back out as Hawking radiation as an end in itself -- horror!

Hollerith, you are now officially as weird as a Yudkowskian alien. If I ever write this species I'll name it after you.

Carl, I realize that I am postulating a sort of complicated and difficult adaptation, and then supposing that a comparatively simpler adaptation did not follow it.

And if I were writing my own story that way, and then criticizing you for writing your own story the other way, that would be unfair.

But Carl, this sort of thing does happen in real-world biology; there are adaptations that seem complicated to us, which fail to improve in ways that seem like they ought to have been relatively simpler even for natural selection. It happens. I am not yet ashamed of using this as a fictional premise.

And I'll also repeat my question about what you think would be a more probable, more evil alien - bearing in mind that the Babyeaters aren't supposed to be completely evil, but anyway it's an interesting question.

Eliezer, to which of the following possibilities would you accord significant probability mass? (1) Richard Hollerith would change his stated preferences if he knew more and thought faster, for all reasonable meanings of "knew more and thought faster"; (2) There's a reasonable notion of extrapolation under which all normal humans would agree with a goal in the vicinity of Richard Hollerith's stated goal; (3) There exist relatively normal (non-terribly-mutated) current humans A and B, and reasonable notions of extrapolation X and Y, such that "A's preferences under extrapolation-notion X" and "B's preferences under extrapolation-notion Y" differ as radically as your and Richard Hollerith's preferences appear to diverge.

Anna, talking about all reasonable meanings of "knew more and thought faster" is a very strong condition.

I would guess... call it 95% probability that a substantial fraction of reasonable construals of "knew more, thought faster" would deconvert the extrapolated Hollerith, and maybe 80% probability that most reasonable construals would so deconvert him. (2) gets negligible probability mass (if Hollerith got to a consistent place, he got there by an unusual sequence of adopted propositional moral beliefs with many degrees of freedom), and so (3) gets the rest by subtraction.

I would greatly prefer that there be Babyeaters, or even to be a Babyeater myself, than the black hole scenario, or a paperclipper scenario. This strongly suggests that human morality is not as unified as Eliezer believes it is... like I've said before, he will be horrified by the results of CEV.

Or the other possibility is just that I'm not human.

Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering on the other hand is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone and the future is just as bright as it would have been if the cold had never been.

And pulling numbers (80%, 95%) out of the air on this question is absurd.

Unknown, how certain are you that you would retain that preference if you "knew more, thought faster"? How certain are you that Eliezer would retain the opposite preference and that we are looking at real divergence? I have little faith in my initial impressions concerning Babyeaters vs. black holes; it's hard for me to understand the Babyeater suffering, or the richness of their lives vs. that of black holes, as more than a statistic.

Eliezer, regarding (2), it seems plausible to me (I'd assign perhaps 10% probability mass) that if there is a well-formed goal with the non-arbitrariness property that both Hollerith and Roko seem partly to be after, there is a reasonable notion of extrapolation (though probably a minority of such notions) under which 95% of humans would converge to that goal. Yes, Hollerith got there by a low-probability path; but the non-arbitrariness he is (sort of) aiming for, if realizable, suggests his aim could be gotten to by other paths as well. And there are variables in one's choice of "reasonable" notions of extrapolation that could be chosen to make non-arbitrariness more plausible. For example, one could give more weight to less arbitrary preferences (e.g., to whatever human tendencies lead us to appreciate Go or the integers or other parts of mathematics), or to types of value-shifts that make our values less arbitrary (e.g., to preferences for values with deep coherence, or to preferences similar to a revulsion for lost purposes), or one could include farther-back physical processes (e.g., biological evolution, gamma rays) as part of the "person" one is extrapolating.
[I realize the above claim differs from my original (2).] Do you disagree?

Richard, I don't see why pulling numbers out of the air is absurd. We're all taking action in the face of uncertainty. If we put numbers on our uncertainty, we give others more opportunity to point out problems in our models so we can learn (e.g., it's easier to notice if we're assigning too-high probabilities to conjunctions, so that mutually exclusive possibilities end up with probabilities summing to more than 1).
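
A minimal sketch of the kind of coherence check that explicit numbers make possible (the function and the numbers below are invented purely for illustration; they are not anyone's actual estimates):

    # Toy coherence check in Python: the sort of inconsistency that only
    # becomes visible once beliefs are stated as explicit numbers.

    def coherence_problems(p_a, p_b, p_a_and_b, exclusive_outcomes):
        """Return a list of basic probability-axiom violations."""
        problems = []
        # A conjunction can never be more probable than either conjunct.
        if p_a_and_b > min(p_a, p_b):
            problems.append("P(A and B) exceeds P(A) or P(B)")
        # Mutually exclusive outcomes cannot have probabilities summing past 1.
        if sum(exclusive_outcomes) > 1:
            problems.append("exclusive outcomes sum to more than 1")
        return problems

    # Example: a conjunction rated above one of its conjuncts, and three
    # mutually exclusive scenarios each given 0.5, are both incoherent.
    print(coherence_problems(p_a=0.6, p_b=0.7, p_a_and_b=0.65,
                             exclusive_outcomes=[0.5, 0.5, 0.5]))

Vague verbal confidence ("quite likely", "almost certain") admits no such check.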

I would greatly prefer that there be Babyeaters, or even to be a Babyeater myself, than the black hole scenario, or a paperclipper scenario.

Seems to me it depends on the parameter values.

Can a preference against arbitrariness ever be stable? Non-arbitrariness seems like a pretty arbitrary thing to care about.

Instead of describing my normative reasoning as guided by the criterion of non-arbitrariness, I prefer to describe it as guided by the criterion of minimizing or pessimizing algorithmic complexity. And that is a reply to steven's question right above: there is nothing unstable or logically inconsistent about my criterion for the same reason that there is nothing unstable about Occam's Razor.

Roko BTW had a conversion experience and now praises CEV and the Fun Theory sequence.

Anna, it takes very little effort to rattle off a numerical probability -- and then most readers take away an impression (usually false) of precision of thought.

At the start of Causality, Judea Pearl explains why humans (should and usually do) use "causal" concepts rather than "statistical" ones. Although I do not recall whether he comes right out and says it, I definitely took away from Pearl the heuristic that stating your probability about some question is basically useless unless you also state the calculation that led to the number. I do recall that stating a number is clearly what Pearl defines as a statistical statement rather than a causal statement. What you should usually do instead of stating a probability estimate is to share with your readers the parts of your causal graph that most directly impinge on the question under discussion.
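
To make that concrete with a toy example - every factor and number below is invented purely for illustration, not anyone's actual belief - this is roughly what sharing the structure behind a probability looks like, as opposed to stating the bare number:

    # Toy factored model of the "deconversion" question (illustration only).
    # Assumed structure: knowledge_gain -> deconversion <- reflection_depth,
    # with the two parent factors treated as independent.

    P_KNOWLEDGE_GAIN = 0.9       # assumed: extrapolation supplies the key facts
    P_DEEP_REFLECTION = 0.8      # assumed: reflection goes deep enough
    P_DECONVERT_IF_BOTH = 0.95   # assumed: deconversion given both factors
    P_DECONVERT_OTHERWISE = 0.2  # assumed: deconversion otherwise

    p_both = P_KNOWLEDGE_GAIN * P_DEEP_REFLECTION
    p_deconvert = (p_both * P_DECONVERT_IF_BOTH
                   + (1 - p_both) * P_DECONVERT_OTHERWISE)
    print(f"P(deconvert) = {p_deconvert:.2f}")  # prints 0.74

The single output number by itself tells a reader little; the labelled factors are the part of the model one can actually argue with.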

So, unless Eliezer goes on to list one or more factors that he believes would cause a human to convert to or convert away from my system of valuing things (namely, goal system zero or GSZ), or one or more factors that he believes would tend to prevent other factors from causing a conversion to or away from GSZ, I am going to go on believing that Eliezer has probably not reflected enough on the question for his numbers to be worth anything and that he is just blowing me off.

In summary, I tend to think that most uses of numerical probabilities on these pages have been useless. On this particular question I am particularly sceptical because Eliezer has exhibited signs (which I am prepared to describe if asked) that he has not reflected enough on goal system zero to understand it well enough to make any numerical probability estimate about it.

I am busy with something urgent today, so I might take 24 hours to reply to replies to this.

Eliezer, if I understand you correctly, you would prefer a universe tiled with paperclips to one containing both a human civilization and a babyeating one. Let us say the babyeating captain shares your preference, and you and he have common knowledge of both these preferences.

Would you now press a button exterminating humanity?

I've not read this all the way through yet, but I want to add that space travel would seem a great deal more appealing were there Mistress of Fandom positions available.

I should think it obvious that if there are Masters of Fandom, there are Mistresses of Fandom.

Though Google turns up only 4 hits for "Secret Mistress of Fandom", which may imply that "Secret Master of Fandom" is considered a gender-free term.

The anthropological theory of René Girard suggests that our culture (and religion, or rather religion and culture) has its roots in organizing human groups for the lynching of a chosen individual. This shocking revelation is not that far from the fiction you have created here - I wonder if it was the inspiration (but then it would be in conflict with your usual anti-Christianity stance).

You lost me

Greg Gurevich

Eliezer wasn't the first to think of this sort of thing:
http://en.wikipedia.org/wiki/A_Modest_Proposal
