
December 17, 2008

Comments

As a preference utilitarian, I dislike happiness studies. They're much too easy to use as justification for social engineering schemes.

You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?

Fun seems to *require* not-fun, in my experience with this particular body. Nevertheless, sign me up for the orgasmium (which, appropriately, came right after "twice as hard")?

I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.
And the twenty lines are from the "spam" sketch. :)

I agree with the basic thing you're saying here, although, personally, I would want to start right away with some amount of mental improvement: a bit of debugging here, an improvement there. Maybe even taking it a bit slow, but definitely not waiting to at least start. But that's just me. :)

I certainly agree that we don't need to instantly all, well, I believe the phrase you once used was "burn for the light," but I think I'd prefer for the option to at least be available.

Other than that, I could always spend a year contemplating the number 1, then a year contemplating the number 2... (sorry, it was the obvious reference that _HAD_ to be made here. :D)

Yeah, I did. Only because I see political machinations as far more dangerous than the problems happiness studies solve.

'Fun' is just a word. Your use of it probably doesn't coincide with the standard meaning. The standard version of fun could likely be easily boxed in an orgasmium type state. You've chosen a reasonable sounding word to encapsulate the mechanisms of your own preference. Nietzsche would call that mechanism instinct, Crowley love. What it boils down to is that we all have a will, and that will is often counter to prior moralities and biological imperatives.

My own arbitrary word for preferable future states is 'interesting'. You'd have to be me for that to really mean anything though.

You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?
Oh, snap.
Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral ... to talk about human-level minds still running around the day after the Singularity.
We're offended by the inequity - why does that big hunk of meat get to use 2,000 W plus 2,000 square feet that it doesn't know what to do with, while the poor, hardworking, higher-social-value em gets 5 W and one square inch? And by the failure to maximize social utility.

Fun is a cognitive phenomenon. Whatever your theory of fun is, I predict that more fun will be better than less fun, and the moral thing to do seems to be to pack in as much fun as you can before the heat death of the universe. Following that line of thought could lead to universe-tiling.

Suppose you develop a theory of fun/good/morality. What are arguments for not tiling the universe in a way that maximizes it? Are there any such arguments that don't rely on either diversity as an inherent good, or on the possibility that your theory is wrong?
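To make the tiling argument behind that question explicit (a toy formalization in my own notation, not anything from the post): if the objective is just total fun,

$$U = \int_0^{T} \int_V f(x, t)\, dV\, dt,$$

and some configuration of matter maximizes the fun-density $f$ pointwise, then the optimum is to fill all of spacetime with that configuration, i.e. to tile. Any argument against tiling has to add a term this integral lacks - diversity, identity, distribution - or appeal to uncertainty about $f$ itself, which is exactly the dichotomy I'm asking about.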

Your post seems to say that fun and morality are the same. But we use the term "moral" only in cases when the moral thing to do isn't fun. I think morality = fun only if it's a collective fun. If that collective fun is also summed over hypothetical agents you could create, then we come back to moral outrage at humans.

The problem brings to mind the colonization of America. Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization that can support a population of about 100 times as many people, who think they are living more pleasurable and interesting lives, and hardly ever cut out their neighbors' hearts on the tops of temples to the sun god? Intellectuals today unanimously say "yes". But I don't think they've allowed themselves to actually consider the question.

What is the moral argument for not colonizing America?

Does that mean I could play a better version of World of Warcraft all day after the singularity? Even though it's a "waste of time"?

The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative: The eudaimonic life is the one that is as pleasurable as possible. So even happiness attained through drugs is good? Yes, in fact: Pearce's motto is "Better Living Through Chemistry".

Well, it's definitely better than the alternative. We don't necessarily want to build Jupiter-sized blobs of orgasmium, but getting rid of misery would be a big step in the right direction. Pleasure and happiness aren't always good, but misery and pain are almost always bad. Getting rid of most misery seems like a necessary, but not sufficient, condition for Paradise.

I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.

You know, I wouldn't be surprised, considering that you can fit most of physics on a T-shirt. (Isn't God written in Lisp, though?)

Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization...?

False dichotomy.

Slightly tangential, but I think this needs addressing:

What is the moral argument for not colonizing America?

Literally interpreted, that's a meaningless question. We can't change history by moral argument. What we can do is point to past deeds and say, "let's not do things like that anymore".

If European civilization circa 1500 had been morally advanced enough to say, "let's not trample upon the rights of other peoples", chances are they would already have been significantly more advanced in other ways too. Moral progress takes work, just like technological and intellectual progress. Indeed we should expect some correlation among these modes of progress, should we not? And isn't that largely what we find?

By critiquing the errors of the past, we may hope to speed up our own progress on all fronts. This is (or should be) the point of labeling the colonization of America (in the way it happened) as "wrong".

It's getting close to the time when you have to justify your bias against universe-tiling.

If you truly believe that happiness/fun/interest/love/utils is the measurement to maximize, and you believe in shutting up and multiplying, then converting all matter to orgasmium sounds right, as a first approximation. You'd want self-improving orgasmium, so it can choose to replace itself with something that can enjoy even more fully and efficiently, of course.

Heh, if I could believe in a limited creator-god, I'd be tempted to think humans might be seed orgasmium. Our job is to get better at it and fill the universe with ourselves.

I'm an atheist who likes singing Song of Hope in church. I'd like to be a wirehead (or enter Nozick's experience machine). I don't know of any reason to delay becoming a superintelligence unless being a wirehead is the alternative.

The Indians were in large part killed by disease introduced by English fishermen. That's why Plymouth was relatively depopulated when the Pilgrims arrived and the Mound-Building Civilization collapsed without ever coming into contact with Europeans.

komponisto, as a non-cognitivist I don't find the notion of moral "progress" to be meaningful, and I'd like to hear your argument for why we should expect some sort of empirical correlation between it and, say, technological advancement (which gives the overwhelming power that in turn makes genocide possible).

"...if you would prefer not to become orgasmium, then why should you?"

I'd prefer not to become orgasmium, because I value my consciousness and humanity, my ability to think, decide, and interact. However, it's unclear to me what exactly preference is, other than the traversal of pathways we've constructed, whether we're aware of them or not, leading to pleasure, or away from pain. To drastically oversimplify, those values exist in me as a result of beliefs I've constructed, linking the lack of those things to an identity that I don't want, which in turn is eventually linked to an emotional state of sadness and loss that I'd like to avoid. There's also probably a link to a certain identity that I do want, which leads to a certain sense of pride and rightness, which leads me to a positive emotional state.

Eliezer, you said there was nothing higher to override your preference to increase intelligence gradually. But what about the preferences that led you to that one? What was it about the gradual increase of intelligence, and your beliefs about what that means, that compelled you to prefer it? Isn't that deeper motivation closer to your actual volition? How far down can we chase this? What is the terminal value of fun, if not orgasmium?

Or is "fun" in this context the pursuit specifically of those preferences that we're consciously aware of as goals?

TGGP, I'm afraid you've committed the moral analogue of replying to some truth claim with a statement of the form: "As a non-X-ist, I don't find the notion of truth to be meaningful".

By "moral progress" I simply mean the sense in which Western civilization is nicer today than it used to be. E.g. we don't keep slaves, burn live cats, etc. (If you have any doubts about whether such progress has occurred, you underestimate the nastiness of previous eras.) In particular, please note that I am not invoking any sort of fancy ontology, so let's not get derailed that way.

As for why we should expect moral progress to correlate with other kinds: well, maybe for arbitrary minds we shouldn't. But we humans keep trying to become both smarter and nicer, so it shouldn't be surprising that we succeed in both dimensions more and more over time.

Eliezer: Isn't your possible future self's disapproval one highly plausible reason for not spending lots of resources developing slowly?

Honestly, the long-recognized awfulness of classic descriptions of heaven seems like counter-evidence to the thesis of "Stumbling on Happiness". I can't be confident about how good I am at knowing what would make me happy; if the evidence shows that people in general are bad at knowing what will make them happy, I should expect to be bad at it too. But if I know that people in general are comically awful at knowing what will make them happy *compared to myself and to most people whose judgment I respect*, then that fact basically screens off the standard empirical evidence of bad judgment as it applies to me.
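(A toy formalization of that screening-off claim - my notation, and it assumes my relative calibration carries all the relevant information: let $B$ = "my happiness predictions are bad", $P$ = "people in general predict badly", and $C$ = "I predict well relative to the population". The claim is the conditional independence

$$\Pr(B \mid P, C) = \Pr(B \mid C),$$

i.e. once I condition on how I compare to others, the population-level evidence adds nothing further about me.)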

Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?

Komponisto: "Moral progress takes work, just like technological and intellectual progress. Indeed we should expect some correlation among these modes of progress, should we not?"
Honestly, this seemed obvious before the 20th century, when the Germans showed that it was possible to be plausibly the world's most scientifically advanced culture and still morally backward. Our civilization still doesn't know what to make of that. We obviously see correlation, but also outliers.

"because you don't actually want to wake up in an incomprehensible world"

Isn't that what all people do each morning anyway?

I don't know if this comment will get past the political correctness criterion. May the webadmin have mercy on my soul :)

Eliezer, I am very much tempted to go into personal comments. I will do that on one premise only – that the title of this blog is “Overcoming Bias”. I would like to contribute to that purpose in good faith.

Having read some of Eliezer's posts, I was sure that he had been treated with a high dose of Orthodox Judaism. In this post he specifically points to that fact, thus confirming my analysis. To other readers: Orthodox Judaism requires that every action and thought be informed by divinely revealed law and ethics. It is one of the most sophisticated religious dogmas imaginable, and in its complexity and depth is comparable only to Buddhism.

Another important feature of Orthodox Judaism is its compartmentalization. This provides adherents of this religion with a very special belief system centered on the indisputable sacredness of all things Jewish. It is so strong a system, indeed, that it sometimes leads to well-documented obsessive-compulsive disorders.

Happily, it appears Eliezer has evaded the intricate chains of that belief system. My wild guess here is that he needs a substitute system. That is why he is so keen on the Singularity. That is why he would like to have his Fun Theory – to counter his lack of security after leaving the warm house of Jahveh. So he is building a new, quite complex and evolutionary belief system that looks to me like a modern-day חסידות (Hasidut).

I can only sympathize.

Michael, I take the point about outliers -- but claims like the one I made are inherently statistical in nature.

Furthermore, it is worth noting that (1) pre-WWI Germany would indeed have to be considered one of the more morally enlightened societies of the time; and (2) the Nazi regime ultimately proved no help to the cause of German scientific and cultural advancement -- and that's putting it way too mildly.

So perhaps this episode, rather than undermining the proposed correlation, merely illustrates the point that even advanced civilizations remain vulnerable to the occasional total disaster.

Doug S.: if only it were 20 lines of Lisp... it isn't; see http://xkcd.com/224/ :)

Furthermore... it seems to me that a FAI which creates a nice world for us needs the whole human value system AND its coherent extrapolation. And knowing how complicated the human value system is, I'm not sure we can accomplish even the former task. So what about creating a "safety net" AI instead? Let's upload everyone who is dying or suffering too much, create advanced tools for us to use, but otherwise preserve everything until we come up with a better solution. This would fit into 20 lines, "be nice" wouldn't.

Were the people burning cats really trying to become non-cat-burners? Wasn't slavery viewed as divinely ordained for some time?

Regarding the Germans: winners write the history books. That is why the Soviet Union is not the anathema that Nazi Germany is to us today. If the Germans had won we would not consider them quite so evil. Technological advancement aids in winning wars.

V.G., good theory but I think it's ethnic rather than religious. Ayn Rand fell prey to the same failure mode with an agnostic upbringing. Anyway this is a kind of ad hominem called the Bulverism fallacy ("ah, I know why you'd say that"), not a substantive critique of Eliezer's views.

Substantively: Eliezer, I've seen indications that you want to change the utility function that guides your everyday actions (the "self-help" post). If you had the power to instantly and effortlessly modify your utility function, what kind of Eliezer would you converge to? (Remember each change is influenced by the resultant utility function after the previous change.) I believe (but can't prove) you would either self-destruct, or evolve into a creature the current you would hate. This is a condensed version of the FAI problem, without the AI part :-)

Vladimir, Kant once advised: "free yourself from your self-incurred tutelage".

I think that even if you consider Eliezer's Fun Theory as a somehow independent ethical construct (whatever that means), you still fail to account for the lack of evidentialism in it. To me it appears as a mash-up of sporadic belief and wishful thinking, and the ad hominem causal story for it is definitely worth considering.

V.G., see my exchange with Eliezer about this in November: http://www.overcomingbias.com/2008/11/building-someth.html , search for "religion". I believe he has registered our opinion. Maybe it will prompt an overflow at some point, maybe not.

The discussion reminds me of Master of Orion. Anyone remember that game? I usually played as the Psilons, a research-focused race, and by the endgame my research tree was maxed out. There was nothing more to do with all those ultra-terraformed planets allocated to 100% research. Opponents still sat around, but I could wipe out the whole galaxy with a single ship at any moment. Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?

Eliezer:

Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral, and above all a failure of imagination, to talk about human-level minds still running around the day after the Singularity.
It won't be any of the ghastly things sometimes professed by the enthusiastic, but I think you should expect a creative surprise, and antipredict specific abstractions not obviously relevant to the whole of morality, such as gradual change in intelligence.

"Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?"

Wow, what a nice elegant Fermi paradox solution:)

Michael Vassar: "Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?"

Huh? No need. Why would you think I'm unaware of that?

I notice that several people replied to my question, "Why not colonize America?", yet no one addressed it. I think they fail to see the strength of the analogy. Humans use many more resources than ems or AIs. If you take the resources from the humans and give them to the AI, you will at some point be able to support 100 times as many "equivalent", equally happy people. Make an argument for not doing that. And don't, as komponisto did, just say that it's the right thing to do.

Everybody says that not taking the land from the Native Americans would have been the right thing to do; but nobody wants to give it back.

An argument against universe-tiling would also be welcome.

TGGP, I'm not going to argue the point that there has been moral progress. It isn't the topic of this post.

Phil Goetz:

Everybody says that not taking the land from the Native Americans would have been the right thing to do; but nobody wants to give it back.

The whole point of my original comment was to refute this very inference. Arguing that taking land from the Native Americans was wrong is not the same as arguing that it should be "given back" now (whatever that would mean). Nor is it the same as wishing we lived in a world where it never happened.

What it means is wishing we lived in a world where the Europeans had our moral values -- and thus also in all probability our science and technology -- centuries ago. Before committing misdeeds against Native Americans.

Also, an argument that the actual colonization of America was "wrong" is not the same as an argument that America should never have been turned into a civilization. Surely there are ways to accomplish this without imposing so much disutility on the existing inhabitants*. Likewise for creating nice worlds with ems and AIs.

*There lies the implicit moral principle, in case you didn't notice.

I can't believe the discussion has got this far and no-one has mentioned The Land of Infinite Fun.

Yes, thank you, I was expecting someone to mention the Culture. I'll discuss it explicitly at some point.

komponisto, we can leave aside the question of whether moral progress is possible or actual and focus on why we should expect it to be associated with technological progress. We can easily see that in the Middle Ages people were trying to create tougher armor and more powerful weaponry. Ethically, they strove to be more obedient Christians. That includes setting as goals things that many of us today consider IMMORAL. Rather than hoping for progress along that axis, many instead thought that mankind was Fallen from an earlier golden age and if anything sought to turn the clock back (that is how the early Protestants and Puritans viewed themselves). It was never the case that anybody made moral discoveries that were then proven to all who would listen, as in Eliezer's silly example of At'gra'len'ley. It was often the case that two sides considered each other immoral and one of them outcompeted the other militarily and shut up its propagandists. For what reason should we think it most likely that the victor actually was more moral?

So I apologize, Vladimir, for bringing this up again, but I'm sort of a newcomer :)

However, notice that even in "Building Something Smarter" Eliezer does NOT deny his underlying need for a religious foundation (he simply declines to comment, which, among other things, denotes his own dissatisfaction with that, well, bias).

How odd, I just finished reading The State of the Art yesterday. And even stranger, I thought 'Theory of Fun' while reading it. Also, nowhere near the first time that something I've been reading has come up here in a short timeframe. Need to spend less time on this blog!

Trying to anticipate the next few posts without reading:

Any Theory of Fun will have to focus on that elusive magical barrier that distinguishes what we do from what orgasmium does. Why should it be that we place a different value on earning fun than on simply mainlining it? The intuitive answer is that 'fun' is the synthesis of endeavour and payoff. Fun is what our brains do when we are rewarded for effort. The more economical and elegant the effort we put in for higher rewards, the better. It's more fun to play Guitar Hero when you're good at it, right?

But it can't just be about the ratio of reward to effort, since orgasmium's ratio is infinite in this sense. So we want to put in a quantity of elegant, efficient effort, and get back a requisite reward. Still lots of taboo-able terms in there, but I'll think further on this.
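(A toy model of that intuition - my own notation and assumptions, not anything from the post: write effort as $E$ and reward as $R$. Reward-per-effort $R/E$ is "maximized" by orgasmium as $E \to 0$, so a fun function matching the intuition above has to vanish when effort does, e.g.

$$F(E, R) = R \cdot \frac{E}{E + c}, \qquad c > 0,$$

which assigns zero fun to effortless reward (orgasmium), yet saturates in $E$, so past a point extra effort adds little and economical, elegant effort is what pays.)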

V.G., since you seem to be an intelligent newcomer, I direct you to Is Humanism A Religion-Substitute? and suggest that you also browse the Religion tag.

Hmm. I wonder if this indicates that we may expect to see an exposition on the topic of Eliezer's preferred attempt at a solution to the wirehead problem. That would be fun.

We don't need to transform the universe into something we feel dutifully obligated to create but that isn't really much fun - in the same way that a Christian would feel dutifully obliged to enjoy heaven - or that some strange folk think that creating orgasmium is, logically, the rightest thing to do.

It doesn't seem impossible to me (only unlikely) that orgasmium is really the best thing there could be according to our idealized preferences, and far better than anything we could be transformed into while preserving personal identity, such that we would be dutifully obligated to create it, even though it's no fun for us or anything we identify with. I think this would stretch the point somewhat.

Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral, and above all a failure of imagination, to talk about human-level minds still running around the day after the Singularity.

For me, the very concept of "the day after the Singularity" is so far out - and off the rails - that I would hardly describe it as a failure of the imagination.

The idea seems more likely to be the result of an overactive, over-stimulated imagination - or perhaps the wild imaginings of some science fiction author.

In other words, "what is well-being?", in such terms that we can apply it to a completely alien situation. This is an important issue.

One red herring, I think, is this:

One major set of experimental results in hedonic psychology has to do with overestimating the impact of life events on happiness.

That could be read two ways. One way is the way that you and these psychologists are reading it. Another interpretation is that the subjects estimated the impact on their future well-being correctly, but after the events, they reported their happiness with respect to their new baseline, which became adjusted to their new situation. The second thing is effectively the derivative of the first. In this interpretation the subjects' mistake is confusing the two.
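(A toy formalization of that second reading - my notation, assuming exponential baseline adaptation: let well-being be $w(t)$ and let the baseline $b(t)$ relax toward it, $\dot b = (w - b)/\tau$. If reported happiness is the gap $h(t) = w(t) - b(t)$, then once adaptation settles, $b \approx w - \tau \dot w$ and so $h \approx \tau\, \dot w$: reports track the derivative of well-being rather than its level, which is how subjects can forecast $w$ correctly and still report near-baseline happiness after the event.)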

