
January 02, 2009

Comments

Going into the details of Fun Theory helps you see that eudaimonia is actually complicated - that there are a lot of properties necessary for a mind to lead a worthwhile existence. Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.

Something with a utility function "rolled at random" typically does not "optimise the universe". Rather it dies out. Of those agents with utility functions that do actually spread themselves throughout the universe, it is not remotely obvious that most of them are "worthless" or "uninteresting" - unless you choose to define the term "worth" so that this is true, for some reason.

Indeed, rather the opposite - since such agents would construct galactic-scale civilisations, they would probably be highly interesting and valuable instances of living systems in the universal community.
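As a very rough illustration of the "typically dies out" claim (entirely my own toy sketch, not from the thread; the persists() test, its goal list and its threshold are invented stand-ins, and it deliberately ties viability directly to the terminal weights, ignoring instrumental convergence):

import random

random.seed(0)

GOALS = ("self_preservation", "resource_acquisition", "replication", "paperclips", "stabbing")

def random_utility():
    """A utility function "rolled at random": random weights over arbitrary goals."""
    return {goal: random.random() for goal in GOALS}

def persists(weights, threshold=0.8):
    """Crude stand-in viability test: the agent spreads only if it weights
    self-preservation, resource acquisition and replication all heavily."""
    return all(weights[g] > threshold
               for g in ("self_preservation", "resource_acquisition", "replication"))

population = [random_utility() for _ in range(100_000)]
survivors = [w for w in population if persists(w)]
print(f"{len(survivors)} of {len(population)} randomly rolled utility functions persist")
# With a 0.8 threshold on three independent weights, about 0.2**3 = 0.8% pass.

Under these assumptions, the handful of utility functions that do spread are exactly the ones weighted toward persistence and growth - which is the shape of the argument above.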

Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things.

Sure it would: as proximate goals. Animals are expected gene-fitness maximisers. Expected gene-fitness is not somehow intrinsically more humane than expected paperclip number. Both have about the same chance of leading to the things you mentioned being proximate goals.

Novelty-seeking and self-awareness are things you get out of any sufficiently-powerful optimisation process - just as such processes all develop fusion, space travel, nanotechnology, and so on.

Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things.
But advanced evolved organisms probably will.

The paper-clipper is a straw man that is only relevant if some well-meaning person tries to replace evolution with their own optimization or control system. (It may also be relevant in the case of a singleton; but it would be non-trivial to demonstrate that.)

All of Tim Tyler's points have been addressed in previous posts. Likewise the idea that evolution would have more shaping influence than a simple binary filter on utility functions. Don't particularly feel like going over these points again; other commenters are welcome to do so.

A random utility function will do fine, iff the agent has perfect knowledge.

Imagine, if you will, a stabber: something that wants to turn the world into things that have been stabbed. If it knows that stabbing itself will kill itself, it will know to stab itself last. If it doesn't know that stabbing itself will leave it unable to stab anything else, it may stab itself too early and fail to achieve its stabbing goal.
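To make the ordering problem concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the comment; the object names and the worst-case "stab itself first" ordering are assumptions):

# Toy "stabber" planner: with an accurate self-model the self-stab is
# scheduled last; without one, the agent may disable itself too early.

def plan_stabbing(objects, self_id, knows_self_stab_is_fatal):
    """Return an order in which to stab every object exactly once."""
    others = [o for o in objects if o != self_id]
    if knows_self_stab_is_fatal:
        # Perfect self-knowledge: defer the self-destructive action to the end.
        return others + [self_id]
    # Imperfect self-knowledge: nothing marks the agent as special, so the
    # self-stab can land anywhere in the sequence - here, first (worst case).
    return [self_id] + others

def stabs_achieved(plan, self_id):
    """Count how many stabs happen before the agent destroys itself."""
    count = 0
    for target in plan:
        count += 1
        if target == self_id:  # stabbing itself ends the run
            break
    return count

world = ["rock", "tree", "fence", "agent"]
print(stabs_achieved(plan_stabbing(world, "agent", True), "agent"))   # 4: goal fully achieved
print(stabs_achieved(plan_stabbing(world, "agent", False), "agent"))  # 1: agent dies after one stab

The difference between the two runs is purely a matter of knowledge, not of the utility function itself - which is the point being made.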

Well, that is so vague as to hardly be worth the trouble of responding to - but I will say that I do hope you were not thinking of referring me here.

However, I should perhaps add that I overspoke. I did not literally mean "any sufficiently-powerful optimisation process" - only that such things are natural tendencies, ones that tend to be produced unless you actively wire the utility function to prevent their manifestation.

All of Tim Tyler's points have been addressed in previous posts. Likewise the idea that evolution would have more shaping influence than a simple binary filter on utility functions. Don't particularly feel like going over these points again; other commenters are welcome to do so.
Or perhaps someone else will at least explain what "having more shaping influence than a simple binary filter on utility functions" means. It sounds like it's supposed to mean that all evolution can do is eliminate some utility functions. If that's what it means, I don't see how it's relevant.

My guess is that it's a representation of my position on sexual selection and cultural evolution. I may still be banned from discussing this subject - and anyway, it seems off-topic on this thread, so I won't go into details.

If this hypothesis about the comment is correct, the main link that I can see would be: things that Eliezer and Tim disagree about.

The society of Brave New World actually seemed like quite an improvement to me.

"John C. Wright, who wrote the heavily transhumanist The Golden Age, had some kind of temporal lobe epileptic fit and became a Christian. There's a once-helpful soul, now lost to us."

This seems needlessly harsh. As you've pointed out in the past, the world's biggest idiot/liar saying the sun is shining does not necessarily mean it's dark out. The fictional evidence fallacy notwithstanding, if Mr. Wright's novels have useful things to say about transhumanism or the future in general, they should be appreciated for that. The fact that the author is born-again shouldn't mean we throw his work on the bonfire.

TGGP,

The Brave New World was exceedingly stable and not improving. Our current society has some chance of becoming much better.

My own complaints regarding the Brave New World consist mainly of noting that Huxley's dystopia specialized in making people fit the needs of society. And if that meant whittling down a square peg so it would fit into a round hole, so be it.

Embryos were intentionally damaged (primarily through exposure to alcohol) so that they would be unlikely to have capabilities beyond what society needed them to have.

This is completely incompatible with my beliefs about the necessity of self-regulating feedback loops, and developing order from the bottom upwards.
