January 21, 2009

Comments

Eliezer, didn't you say that humans weren't designed as optimizers? That we satisfice. The reaction you got is probably a reflection of that. The scenario ticks most of the boxes humans have: existence, self-determination, happiness, and meaningful goals. The paperclipper scenario ticks none. It makes complete sense for a satisficer to pick it instead of annihilation. I would expect that some people would even be satisfied by a singularity scenario that kept death, as long as it removed the chance of existential risk.

Oh please not boreana.
Many of us women vastly prefer marsterii, and I must assume including both would make Venus somewhat unstable and dusty.

""good enough" or an "amazing improvement""
Some people may blur those together, but logarithmic perception of rewards and narrow conscious aims explain a lot. Agelessness, invulnerability to violence, ideal mates, and a happy future once technology is re-established, to the limits of the AI's optimization capability (although I wonder if that means it has calculated we're likely to become wireheads the next time around, or otherwise create a happiness-inducer that indirectly bypasses some of the 107 rules) satisfy a lot of desires. Especially for immortality-obsessed transhumanists. And hedonists. Not to mention: singles.

"My suspicion is that people would turn around and criticize it - that what we're really seeing here is contrarianism."
Or perhaps your preferences are unusual, both because of values and because of time pondering the issue. This scenario has concrete rewards tickling the major concerns of most humans. Your serious application of Fun Theory would be further removed from today's issues: fear of death, lack of desirable mates, etc, and might attract criticism because of that.

"boreana"

This means "half Bolivian half Korean" according to urbandictionary. I bet I'm missing something.

Perhaps we should have a word ("mehtopia"?) for any future that's much better than our world but much worse than could be. I don't think the world in this story qualifies for that; I hate to be negative guy all the time but if you keep human nature the same and "set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair", they still may abuse one another a lot physically and emotionally. Also I'm not keen on having to do a space race against a whole planet full of regenerating vampires.

Remember, Eliezer, that what we're comparing this life to when saying 'hmm, it's not that bad' is

1) Current life, averaged over the entire human species including the poor regions of Africa. Definitely an improvement over that.
2) The paperclipping of the world, which was even mostly avoided.

It's not a successful utopia, because it could be better; significantly better. It's not a failed one, because people are still alive and going to be pretty happy after an adjustment period.

Much of what you've been building up in many of your posts, especially before this latest Fun Theory sequence, is "we have to do this damn right or else we're all dead or worse". This is not worse than death, and in fact might even be better than our current condition; hence the disagreement with characterizing this as a horrible, horrible outcome.

It seems like the people who are not happily married get a pretty good deal out of this, though? I'm not sure I understand how 90% of humanity ends up wishing death on the genie. Maybe 10% of humanity had a fulfilling relationship broken up, and 80% are just knee-jerk luddites.

This is what I think of as a "mildly unfriendly" outcome. People still end up happy, but before the change, they would not have wanted the outcome. One way for that to happen involves the AI forcibly changing value systems, so that everyone suddenly has an enthusiasm for whatever imperatives it wishes to impose. In this story, as I understand it, there isn't even alteration of values, just a situation constructed to induce the victory of one set of values (everything involved in the quest for a loved one) over another set of values (fidelity to the existing loved one), in a way which violates the protagonist's preferred hierarchy of values.

Okay, just to disclaim this clearly, I probably would press the button that instantly swaps us to this world - but that's because right now people are dying, and this world implies a longer time to work on FAI 2.0.

But the Wrinkled Genie scenario is not supposed to be probable or attainable - most programmers this stupid just kill you, I think.

"Mehtopia" seems like a good word for this kind of sub-Utopia. Steven's good at neologisms!

I should also note that I did do some further optimizing in my head of the verthandi - yes, they have different individual personalities, yes guys sometimes reject them and they move on, etcetera etcetera - but most of that background proved irrelevant to the story. I shouldn't really be saying this, because the reader has the right to read fiction any way they like - but please don't go assuming that I was conceptualizing the verthandi as uniform doormats.

Some guys probably would genuinely enjoy doormats, though, and so verthandi doormats will exist in their statistical distribution. To give the verthandi a feminist interpretation would quite miss the point. If there are verthandi feminists, their existence is predicated on the existence of men who are attracted to feminists, and I'm reasonably sure that's not what feminism is about.

If you google boreana you should get an idea of where that term comes from, same as verthandi.

It seems like the people who are not happily married get a pretty good deal out of this, though? I'm not sure I understand how 90% of humanity ends up wishing death on the genie.

Good point, Nominull - though even if you're not married, you can still have a mother. Maybe the Wrinkled Genie could just not tell the singles about the verthandi as yet - just that they'd been stripped of technology and sent Elsewhere - but that implies the Wrinkled Genie deliberately planning its own death (as opposed to just planning for its own death), and that wasn't what I had in mind.

90% also seems awfully high for a fail-safe limit. Why not 70%, 50%, or even less? You could just change the number and that'd fix the issue.

I also tend to lean towards the "not half as bad" camp, though a bit of that is probably contrarianism. And I do know futures that'd rank higher in my preference ordering than this. Still, it's having a bit of a weirdtopia effect on me - not at all what I'd have imagined as a utopia at first, but strangely appealing when I think more of it... (I haven't thought about it for long enough to know if that change keeps up the more I think of it.)

Eliezer:
I'd say most of the 'optimism' for this is because you've convinced us that much worse situations are much more likely.

Also, we're picking out the one big thing the AI did wrong that the story is about, and ignoring other things it did wrong (leaving no technology, kidnapping, creating sentients likely to be enslaved). I'm sure there's an already-named bias for only looking at 'big' effects.

And we're probably discounting how much better it could have been. All we got was perfect partners, immortality, and one more planet than we had before. But we don't count the difference between singularity-utopia and #4-2 as a loss.

An excellent story, in the sense that it communicates the magnitude of the kinds of mistakes that can be made, even when one is wise and prudent (or imagines oneself so). I note with more than some amusement that people are busy in the comments adding stricture 108, 109, 110 - as if somehow just another layer or two, and everything would be great! (Leela: "The iceberg penetrated all 7000 hulls!" Fry: "When will humanity learn to make a ship with 7001 hulls!")

Nicely done.

If you google boreana you should get an idea of where that term comes from, same as verthandi.

Still need a little help. Top hits appear to be David Boreanaz, a plant in the Rue family, and a moth.

But if not - if this world indeed ranks lower in my preference ordering, just because I have better scenarios to compare it to - then what happens if I write the Successful Utopia story?

Try it and see! It would be interesting and constructive, and if people still disagree with your assessment, well then there will be something meaningful to argue about.

Great story!

This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character.
Similarly, it's notable that the AI seems to use exactly the same interpretation of the word lie as Eliezer Yudkowsky: that's why it doesn't self-describe as an "Artificial Intelligence" until the verthandi uses the phrase.

... neither of those is unusual if you consider that the very nearly wise fool was Eliezer Yudkowsky.

(Rule 76: "... except for me. I get my volcano base with catgirls.")

I am sorry.

I must not be a human being to not see any problem in this scenario. I can vaguely see that many humans would be troubled by this, but I wouldn't be. Maybe to me humanity is dead already, ambiguity intentional.

I welcome your little scary story as currently to me the world is hell.

"Men and women can make each other somewhat happy, but not most happy" said the genie/ AI.

What will make one individual "happy" will not work for the whole species. I would want the AI to interview me about my wants: I find Control makes me happier than anything, not having control bothers me. Control between fifty options which will benefit me would be good enough, I do not necessarily need to be able to choose the bad ones...er...

Being immortal and not being able to age, and being cured of any injury, sound pretty good to me. It is not just contrarianism that makes people praise this world.

Please do write your "actual shot at applied fun theory".

Science fiction fandom makes me happy. Tear it into two separate pieces, and the social network is seriously damaged.

Without going into details, I have some issues about romantic relationships-- it's conceivable that a boreana could make me happy (and I'm curious about what you imagine a boreana to be like), but I would consider that to be direct adjustment of my mind, or as nearly so as to not be different.

More generally, people tend to have friends and family members of the other sex. A twenty-year minimum separation is going to be rough, even if you've got "perfect" romantic partners.

If I were in charge of shaping utopia, I'd start with a gigantic survey of what people want, and then see how much of it can be harmonized. That would at least be a problem hard enough to be interesting for an AI.

If that's not feasible, I agree that some incremental approach is needed.

Alternatively, how about a mildly friendly AI that just protects us from hostile AIs and major threats to the existence of the human race? I realize that the human race will be somewhat hard to define, but that's just as much of a problem for the "I just want to make you happy" AI.

"Top hits appear to be David Boreanaz,"

Eliezer is a Buffy fan.

Khannea: Eliezer himself said that he'd take that world over this one, if for no other reason than that world buys more time to work, since people aren't dying.

However, we can certainly see things that _could be better_... We can look at that world and say "eeeh, there're things we'd want different instead"

The whole "enforced breaking up of relationships" thing, for one thing, is a bit of a problem, for one thing.

Although having the girl of my dreams would certainly be nice, I'd soon be pissed off at the lack of all the STUFF that I like and have accumulated. No more getting together with buddies and playing Super Smash Bros (or other video games) for hours? No Internet to surf and discuss politics and such on? No more Magic: the Gathering?

Screw that!

Doug: "Although having the girl of my dreams would certainly be nice, I'd soon be pissed off at the lack of all the STUFF that I like and have accumulated. No more getting together with buddies and playing Super Smash Bros (or other video games) for hours? No Internet to surf and discuss politics and such on? No more Magic: the Gathering?

Screw that!"

You'd rather play "Magic: the gathering" than get laid? WTF?

Because I'm curious:

How much evidence, and what kind, would be necessary before suspicions of contrarianism are rejected in favor of the conclusion that the belief was wrong?

Surely this is a relevant question for a Bayesian.
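(A toy illustration of that Bayesian framing, as a sketch in Python; the prior odds and likelihood ratio below are invented purely for the example, not estimates about this actual thread:)

    # Toy Bayesian update: how strongly would the observed reaction have to
    # favor "the belief really was wrong" over "it's just contrarianism"?
    # All numbers here are made up for illustration.
    prior_odds = 1 / 4        # prior odds that the critics are right rather than contrarian
    likelihood_ratio = 3.0    # assumed: the evidence is 3x likelier if they're right

    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"posterior P(belief was wrong) ~ {posterior_prob:.2f}")   # ~ 0.43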

Doug S,

Indeed. The AI wasn't paying attention if he thought bringing me to this place was going to make me happier. My stuff is part of who I am; without my stuff he's quite nearly killed me. Even more so when 'stuff' includes wife and friends.

But then, he was raised by one person, so there's no reason to think he wouldn't believe in a wrong metaphysics of self.

Roko: Yes. Yes I would.

There are plenty of individual moments in which I would rather get laid than play Magic, but on balance, I find Magic to be a more worthwhile endeavor than I imagine casual sex to be. The feeling I got from this achievement was better and far longer lasting than the feelings I get from masturbation. Furthermore, you can't exactly spend every waking moment having sex, and "getting laid" is not exactly something that is completely impossible in the real world, either.

Also, even though I'm sure that simply interacting with the girl of my dreams in non-sexual ways would, indeed, be a great source of happiness in and of itself, I'd still be frustrated that we couldn't do all the things that I like to do together!

Ah, discussion of the joys of Magic: the Gathering on Overcoming Bias.

It's like all the good stuff converges in one place :)

In view of the Dunbar thing I wonder what people here see as a eudaimonically optimal population density. 6 billion people on Mars, if you allow for like 2/3 oceans and wilderness, means a population density of 100 per square kilometer, which sounds really really high for a cookie-gatherer civilization. It means if you live in groups of 100 you can just about see the neighbors in all directions.
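(A quick back-of-the-envelope check of that figure in Python; the 2/3 oceans-and-wilderness fraction is the assumption from the comment above, and the group-spacing line is only an illustration:)

    import math

    # Rough check of the ~100 people per square kilometer figure.
    mars_radius_km = 3389.5
    surface_km2 = 4 * math.pi * mars_radius_km ** 2   # ~1.45e8 km^2
    habitable_km2 = surface_km2 / 3                   # assume 2/3 is ocean and wilderness
    people = 6e9

    density = people / habitable_km2                  # ~124 people per km^2
    group_area_km2 = 100 / density                    # land per group of 100
    group_spacing_km = math.sqrt(group_area_km2)      # ~0.9 km between neighboring groups

    print(f"density ~ {density:.0f} /km^2, groups ~ {group_spacing_km:.1f} km apart")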

Since people seem to be reading too much into the way the Wrinkled Genie talks, I'll note that I wrote this story in one night (that was the goal I set myself) and that the faster I write, the more all of my characters sound like me and the less they have distinctive personalities. Stories in which the character gets a genuine individual voice are a lot more work and require a lot more background visualization.

Steven, I didn't do that calculation. Well, first of all I guess that Mars doesn't end up as 2/3 ocean, and second, we'll take some mass off the heavier Venus and expand Mars to give it a larger surface area. That's fair.

Eliezer, you're cheating. Getting trapped makes this a dystopia. It would make almost anything a dystopia. Lazy!

Suppose a similar AI (built a little closer to Friendly) decided to introduce verthandi and the pro-female equivalent (I propose "ojisamas") into an otherwise unchanged earth. Can you argue that is an amputation of destiny? Per my thinking, all you've done is doubled the number of genders and much increased the number of sexual orientations, to the betterment of everyone. (What do you call a verthandi who prefers to love an ojisama?)

Angel: "Eliezer is a Buffy fan"

Wow, I hope they have chiropractors on Venus for all the Stoopy McBroodingtons lurking around like Angel. Every time he popped up on Buffy I kept wanting to fix his posture.

Huh. I guess I just don't see Angel (the TV character, not the commenter) as the equivalent of the verthandi. (Also, naming the idea after the actor instead of the character led me somewhat astray.)

Sure this isn't a utopia for someone who wants to preserve "suboptimal" portions of his/her history because they hold some individual significance. But it seems a pretty darn good utopia for a pair of newly created beings. A sort of Garden of Eden scenario.

As for what to call the female equivalent of the "verthandi" - well, Edward Cullen of the recent Twilight series was intended by the author to be a blatant female wish fulfillment/idealized boyfriend character, although the stories and character rub an awful lot of people the wrong way.

Will Pearson: your arguments apply equally well to any planner. Planners have to consider the possible futures and pick the best one (using a form of predicate), and if you give them infinite horizons they may have trouble.

True, whenever you have a planner for a maximizer, it has to decide how to divide its resources between planning and actually executing a plan.

However, your wish needs a satisfier: it needs to find at least one solution that satisfies the predicate "I wouldn't regret it".

The maximizer problem has a "strong" version which translates to "give me the maximum possible in the universe", which is obviously a satisfier problem (i.e., find a solution that satisfies the predicate "is optimal", then implement it). But you can always reformulate these in a "weak" version: "find a way of creating benefit; then use x% of resources to find better ways of maximizing benefit, and the rest to implement the best techniques at the moment", with 0 < x < 100 an arbitrary fraction. (Note that the "find better ways" part can change the fraction if it's sure it would improve the final result.)

So, if you just like paperclips and just want a lot of those, you can just run the weak version of the maximizer and be done with it: you're certain to get a lot of something as long as it's possible.

But for satisfiability problems, you might just have picked a problem that doesn't have a solution. Both "find a future I wouldn't regret" and "make the maximum number of paperclips possible in this Universe" are such satisfiability problems. (I don't know if these problems in particular have a "findable" solution, however, nor how to determine it. The point is that they might not, so it's possible to spend the lifetime of the Universe for nothing.)

The only idea of an equivalent "weak" reformulation would be to say "use X resources (this includes time) to try to find a solution". This doesn't seem as acceptable to me: you might still spend X resources and get zero results. (As opposed to the "weak" maximizer, where you still get something as long as it's possible.) But maybe that's just because I don't care about paperclips that much, I don't know.
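(A minimal sketch in Python of the two control flows being contrasted; the toy model of "research improves the production rate" is invented purely to illustrate the weak-maximizer resource split and the satisficer's possible dead end, not any real agent design:)

    import random

    def weak_maximizer(steps=1000, x=0.2):
        # Each step: spend fraction x of the budget searching for a better
        # production method, and the remaining (1 - x) actually producing.
        best_rate = 1.0   # paperclips per unit of implementation resource
        total = 0.0
        for _ in range(steps):
            candidate = best_rate * (1 + random.uniform(-0.05, 0.10))
            if candidate > best_rate:
                best_rate = candidate          # adopt the improvement found
            total += (1 - x) * best_rate       # produce with the best method so far
        return total

    def satisficer(predicate, candidates):
        # Needs only one option meeting the predicate -- but if none exists,
        # it can burn through every candidate and come back with nothing.
        for c in candidates:
            if predicate(c):
                return c
        return None

    print(f"weak maximizer output: {weak_maximizer():.1f} paperclips")
    print(satisficer(lambda n: n % 7 == 0, range(1, 50)))   # -> 7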

* * *

Now, if you absolutely want to satisfy a predicate, you just don't have any alternative to spending all your resources on that. OK. But are you sure that "no regrets" is an absolutely necessary condition on the future? Actually, are you sure enough of that that you'd be willing to give up everything for the unknown chance of getting it?

Reformulate to least regret after a certain time period, if you really want to worry about the resource usage of the genie.

There's almost a Gene Wolfe feel to the prose, which is, of course, a compliment.

There's almost a Gene Wolfe feel to the prose, which is, of course, a compliment.

I don't usually do the modesty thing, because it feels like handing a gift back to the person who tried to give it to you. But on this occasion - sir, I feel that you praise me way, way, way too highly.

SUPER STORY WOULD READ AGAIN

Eliezer, since you are rejecting the Wolfean praise, I will take the constructive criticism route. This is not your best writing, but you know that since you spent a night on it.

We have three thousand words here. The first thousand are disorientation and describing the room and its occupants. The second thousand is a block of exposition from the wrinkled figure. The third thousand is an expression of outrage and despair. Not a horrid structure, although you would want to trim the first and have the second be less of a barely interrupted monologue.

As a story, the dominant problem is that the characters are standing in a blank room being told what has already happened, and that "what" is mostly "I learned then changed things all at once." There have been stories that do "we are just in a room talking" well or badly; the better ones usually either make the "what happened" very active (essentially a frame story) or accept the recumbent position and make it entirely cerebral; the worse ones usually fall into a muddled in-between.

As a moral lesson, the fridge logic keeps hitting you in these comments, notably that this is a pure Pareto improvement for much of the species. Even as a failed utopia, you accept it as a better place from which to work on a real one. And 89.8% want to kill the AI? The next most common objection has been how this works outside heteronormativity, or for a broad range of sexual preferences. Enabling endless non-fatal torture is another winner for "how well did you think that through?" So it is not bad enough to fulfill its intent, its "catch" seems inadequately conceived, and there are other problems that make the whole scenario suspect.

My first thought of a specific way to better fulfill the story's goals would be to tell it from Helen's perspective, or at least put more focus on her and Lisa. You have many male comments of "hey, not bad." They are thinking of their own situations. They are not thinking of their wives and daughters being sexually serviced by boreana. The AI gets one line about this, but Stephen seems more worried about his fidelity than hers. With a substantially male audience, that is where you want to shove the dagger. Take it in the other direction by having the AI be helpful to Helen. While she does not want to accept her overwhelming attraction to her crafted partner, the AI wants her to make a clean break so she can be happier. It will gladly tell her about how Stephen's partner is more attractive to him than she could ever be, how long it will take for his affection to be alienated, and how rarely he will think about Helen after they have spent more time on different planets than they spent in the same house. Keep the sense of family separation by either making the child a son or noting that the daughter is somewhere on the planet, happier beyond her mother's control; in either case, note that s/he also woke up with a very attractive member of the opposite sex whose only purpose in life is to please him/her. This could be the point to note those male sexual enhancements, and monogamy is not what makes everyone happiest, so maybe Lisa wakes up with a few boreana.

And maybe this is just me, but the AI could seem a bit less like the Dungeonmaster from the old D&D cartoon.

The story has problems, and it's not clear how it's meant to be taken.

Way 1: we should believe the SAI, being a SAI, and so everyone will in fact be happier within a week. This creates cognitive dissonance, what with the scenario seeming flawed to us, and putting us in a position of rejecting a scenario that makes us happier.

Way 2: we should trust our reason, and evaluate the scenario on its own merits. This creates the cognitive dissonance of the SAI being really stupid. Yeah, being immortal and having a nice companion and good life support and protection is good, but it's a failed utopia because it's trivially improvable. The fridge logic is strong in this one, and much has been pointed out already: gays, opposite-sex friends, family. More specific than family: children. What happened to the five year olds in this scenario?

The AI was apparently programmed by a man who had no close female friends, no children, and was not close to his mother. Otherwise the idea that either catgirls or Belldandies should lead to a natural separation of the sexes would not occur. (Is the moral that such people should not be allowed to define gods? Duh.) If I had a catgirl/non-sentient sexbot, that would not make me spend less time with true female friends, or stop calling my mother (were she still alive). Catgirl doesn't play Settlers of Catan or D&D or talk about politics. A Belldandy might, in the sense that finding a perfect mate often leads to spending less time with friends, but it still needn't mean being happy with them being cut off, or being unreceptive to meeting new friends of either sex.

So yeah, it's a pretty bad utopia, defensible only in the "hey, not dying or physically starving" way. But it's implausibly bad, because it could be so much better by doing less work: immortalize people on Earth, angelnet Earth, give people the option of summoning an Idealized Companion. Your AI had to go to more effort for less result, and shouldn't have followed this path if it had any consultation with remotely normal people. (Where are the children?)

The point is, I believe, that we value things in ways not reducible to "maximising our happiness". Here Love is the great example: often we value it more than our own happiness, and also more than the happiness of the beloved. We are not constituted to maximise our own happiness; natural selection tells you that.

You know, I can't help but read this as a victory for humanity. Not a full victory, but I think the probability of some sort of interstellar civilization that isn't a dystopia is higher afterwards than before; if nothing else, we are more aware of the dangers of AI, and anything that does that and leaves a non-dystopian civilization capable of making useful AI is most likely a good thing by my utility function.

One thing that does bug me is I do not value happiness as much as most people do. Maybe I'm just not as empathetic as most people? I mean, I acutely hope that humanity is replaced by a decent civilisation/species that still values Truth and Beauty; I care a lot more whether they are successful than whether they are happy.

I wonder how much of the variance in preference between this and others could be explained by whether they are single (i.e. they don't have someone they love to the point of "I don't want to consider even trying to live with someone else") vs. those that do.

I would take it; I imagine I would be very unhappy for a few months. (It feels like it would take years, but that's a well-known bias.)

I assume "verthandi" is also not a coincidence. "verthandi"
