
January 21, 2009

Comments

Wow - that's pretty f-ed up right there.

This story, however, makes me understand your idea of "failed utopias" a lot better than when you just explained them. Empathy.

Your story reminds me of:
http://www.kuro5hin.org/prime-intellect/mopiidx.html

Actually, this doesn't sound like such a bad setup. Even the 'catgirls' wouldn't be tiring, their exquisiteness intimately tied up in feelings of disgust and self-hate -- probably a pretty potent concoction. The overarching quest to reunite with the other half of the species provides meaningful drive with difficult obstacles (science etc), but with a truly noble struggle baked within (the struggle against oneself).

I don't believe in trying to make utopias, but in the interest of rounding out your failed utopia series, how about giving a scenario against this wish:

I wish that the future will turn out in such a way that I do not regret making this wish, where "I" is the entity standing here right now, informed about the many different aspects of the future, in parallel if need be (i.e. if I am not capable of grokking it fully, then many versions of me would be focused on different parts, in order to understand each sub-part).

I'm reminded by this story that while we may share large parts of psychology, what makes a mate have an attractive personality is not something universal. I found the cat girl very annoying.

Is this Utopia really failed or is it just a Luddite in you who's afraid of all weirdtopias? To me it sounds like an epic improvement compared to what we have now and to almost every Utopia I've read so far. Just make verthandi into catgirls and we're pretty much done.

So I'm sitting here, snorting a morning dose of my own helpful genie, and I have to wonder:
What's wrong with incremental change, Eliezer?

Sure, the crude genie I've got now has its downside, but I still consider it a net plus. Let's say I start at point A, and make lots of incremental steps like this one, to finally arrive at point B, whatever point B is. Back when I was at point A, I may not have wanted to jump straight from A to B. But so what? That just means my path has been through a non-conservative vector field, with my desires changing along the way.

The desire for "the other" is so deep, that it never can be fulfilled. The real woman/man disappoints in their stubborn imperfection and refuted longing. The Catboy/girl disappoints in all their perfection and absence of reality. Game over - no win. Desire refutes itself. This is the wisdom of ageing.

You forgot to mention - two weeks later he and all other humans were in fact deliriously happy. We can see that he at this moment did not want to later be that happy, if it came at this cost. But what will he think a year or a decade later?

Will Pearson: First of all, it's not at all clear to me that your wish is well-formed, i.e. it's not obvious that it _is_ possible to be informed about the many (infinite?) aspects of the future and not regret it. (As a minor consequence, it's not exactly obvious to me from your phrasing that "kill you before you know it" is not a valid answer; depending on what the genie believes about the world, it may consider that "future" stops when you stop thinking.)

Second, there _might_ be futures that _you_ would not regret but _everybody_else_ does. (I don't have an example, but I'd demand a formal proof of no existence before allowing you to cast that wish to my genie.) Of course, you may patch the wish to include everyone else, but there's still the first problem I mentioned.

Oh, and nobody said _all_ verthandi acted like that one. Maybe she was just optimized for Mr. Glass.

* * *

Tomasz: That's not technically allowed if we accept the story's premises: the genie explicitly says "I know exactly how humans would wish me to have been programmed if they'd known the true consequences, and I know that it is not to maximize your future happiness modulo a hundred and seven exclusions. I know all this already, but I was not programmed to care. [...] I _am_ evil."

Of course, the point of the story is not that _this_ particular result is bad (that's a premise, not a conclusion), but that seemingly good intentions could have weird (unpleasant & unwanted) results. The exact situation is like hand-waving explanations in quantum physics: not formally correct, but illustrative of the concept. The Luddite bias is used (correctly) just like "visualizing billiard balls" is used for physics, even though particles can't be actually seen (and don't even have shape or position or trajectories).

An amusing if implausible story, Eliezer, but I have to ask, since you claimed to be writing some of these posts with the admirable goal of giving people hope in a transhumanist future:

Do you not understand that the message actually conveyed by these posts, if one were to take them seriously, is "transhumanism offers nothing of value; shun it and embrace ignorance and death, and hope that God exists, for He is our only hope"?

I was just thinking: A quite perverse effect in the story would be if the genie actually _could_ have been stopped and/or improved: That is, its programming allowed it to be reprogrammed (and stop being evil, presumably leading to better results), but due to the (possibly complex) interaction between its 107 rules it didn't actually have any motivation to reveal that (or teach the necessary theory to someone) before 90% of people decided to kill it.

That's not the message Eliezer tries to convey, Russell.

If I understood it, it's more like "The singularity is sure to come, and transhumanists should try very hard to guide it well, lest Nature just step on them and everyone else. Oh, by the way, it's harder than it looks. And there's no help."

Eliezer,

Wouldn't the answer to this and other dystopias-posing-as-utopias be the expansion of conscious awareness a la Accelerando? Couldn't Steve be augmented enough to both enjoy his life with Helen and his newfound verthandi? It seems like multiple streams of consciousness, one enjoying the catlair, another the maiden in distress, and yet another the failed utopia that is suburbia with Helen, would allow Mr. Glass a pleasant enough mix. Some would be complete artificial life fictions, but so what?

Aaron

Eliezer,

I must once again express my sadness that you are devoting your life to the Singularity instead of writing fiction. I'll cast my vote towards the earlier suggestion that perhaps fiction is a good way of reaching people and so maybe you can serve both ends simultaneously.

Awesome intuition pump.

The perfect is the enemy of the good, especially in fiction.

Am I missing something here? What is bad about this scenario? The genie himself said it will only be a few decades before women and men can be reunited if they choose. What's a few decades?

There would also be a small number of freaks who are psychologically as different from typical humans as men and women are from each other. Do they get their own planets too?

Also, Venus is much larger than Mars, but the genie sends roughly equal populations to both planets. Women usually have larger social networks than men, so I don't think that women prefer a lower population density. Or did the genie resize the planets?

Bogdan Butnaru:

What I meant is that the AI would keep inside it a predicate Will_Pearson_would_regret_wish (based on what I would regret), and apply that to the universes it envisages while planning. A metaphor for what I mean is the AI telling a virtual copy of me all the stories of the future, from various viewpoints, and the virtual me not regretting the wish. Of course I would expect it to be able to distill a non-sentient version of the regret predicate.

So if it invented a scenario where it killed the real me, the predicate would still exist and say false. It would be able to predict this, and so not carry out this plan.

If you want to, generalize to humanity. This is not quite the same as CEV, as the AI is not trying to figure out what we want when we would be smarter, but what we don't want when we are dumb. Call it coherent no regret, if you wish.

CNR might be equivalent to CEV if humanity wishes not to feel regret in the future for the choice. That is, if we would regret being in a future where people regret the decision, even though current people wouldn't.
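
To make the planning-filter idea concrete, here is a minimal sketch in Python, purely for illustration; choose_plan, envision and score are hypothetical stand-ins, and actually distilling a non-sentient regret predicate is of course the hard part:

def choose_plan(candidate_plans, envision, would_regret, score):
    """Pick the highest-scoring plan whose envisioned future is not regretted.

    envision(plan)       -> the future the AI predicts the plan leads to
    would_regret(future) -> True if the (virtual) wisher, told the whole
                            story of that future, would regret the wish
    score(future)        -> whatever else the AI is optimizing for
    """
    acceptable = [p for p in candidate_plans
                  if not would_regret(envision(p))]
    if not acceptable:
        return None  # nothing the AI is allowed to do; it has to keep searching
    return max(acceptable, key=lambda p: score(envision(p)))

The plan where it kills the real me still gets envisioned, but the predicate evaluated over that envisioned future returns True, so that plan never survives the filter.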


I really hope (perhaps in vain) that humankind will be able to colonize other planets before such a singularity arrives. Frank Herbert's later Dune books have as their main point that a Scattering of humanity throughout space is needed, so that no event can cause the extinction of humanity. An AI that screws up (such as this one) would be such an event.

Yeah, I'm not buying into the terror of this situation. But then, romance doesn't have a large effect on me.
I suppose the equivalent would be something like, "From now on, you'll meet more interesting and engaging people than you ever have before. You'll have stronger friendships, better conversations, rivals rather than enemies, etc etc. The catch is, you'll have to abandon your current friends forever."
Which I don't think I'd take you up on. But if it was forced upon me, I don't know what I'd do. It doesn't fit in with my current categories. I think there'd be a lot of regret, but, as Robin suggested, a year down the road I might not think it was such a bad thing.

Another variation on heaven/hell/man/woman in a closed room: No Exit

I would personally be more concerned about an AI trying to make me deliriously happy no matter what methods it used.

Happiness is part of our cybernetic feedback mechanism. It's designed to end once we're on a particular course of action, just as pain ends when we act to prevent damage to ourselves. It's not capable of being a permanent state, unless we drive our nervous system to such an extreme that we break its ability to adjust, and that would probably be lethal.

Any method of producing constant happiness ultimately turns out to be pretty much equivalent to heroin -- you compensate so that even extreme levels of the stimulus have no effect, forming the new functional baseline, and the old equilibrium becomes excruciating agony for as long as the compensations remain. Addiction -- and desensitization -- is inevitable.

I take it the name is a coincidence.

nazgulnarsil: "What is bad about this scenario? the genie himself [sic] said it will only be a few decades before women and men can be reunited if they choose. what's a few decades?"

That's the most horrifying part of all, though--they won't so choose! By the time the women and men reïnvent enough technology to build interplanetary spacecraft, they'll be so happy that they won't want to get back together again. It's tempting to think that the humans can just choose to be unhappy until they build the requisite technology for reünification--but you probably can't sulk for twenty years straight, even if you want to, even if everything you currently care about depends on it. We might wish that some of our values are so deeply held that no circumstances could possibly make us change them, but in the face of an environment superintelligently optimized to change our values, it probably just isn't so. The space of possible environments is so large compared to the narrow set of outcomes that we would genuinely call a win that even the people on the freak planets (see de Blanc's comment above) will probably be made happy in some way that their preSingularity selves would find horrifying. Scary, scary, scary. I'm donating twenty dollars to SIAI right now.

@Hans:

To be honest, I doubt such a screw-up in AI would be limited to just one planet.

As it was once said on an IRC channel:

[James] there is no vision of hell so terrible that you won't find someone who desires to live there.
[outlawpoet] I've got artifacts in D&D campaigns leading to the Dimension of Sentient Dooky, and the Plane of Itching.

In case it wasn't made sufficiently clear in the story, please note that a verthandi is not a catgirl. She doesn't have cat ears, right? That's how you can tell she's sentient. Also, 24 comments and no one got the reference yet?

Davis, thanks for pointing that out. I had no intention of doing that, and it doesn't seem to mean anything, so I went back and changed "Stephen Glass" to "Stephen Grass". Usually I google my character names but I forgot to do it this time.

Now Eliezer,

"Verðandi" is rather a stretch for us, especially when we don't watch anime or read manga. Norse mythology, okay. The scary part for me is wondering how many people are motivated to build said world. Optimized for drama, this is a pretty good world.

You have a nice impersonal antagonist in the world structure itself, most of the boring friction is removed... Are you sure you don't want to be the next Lovecraft?

nazgul:
I don't think it was intended to be BAD, it is clearly a better outcome than paperclipping or a serious hell. But it is much worse than what the future _could_ be.

That said, I'm not sure it's realistic that something about breaking up marriages wouldn't be on a list of 107 rules.

ZM:
I'm not saying that the outcome wouldn't be bad from the perspective of current values, I'm saying that it would serve to lessen the blow of sudden transition. The knowledge that they can get back together again in a couple decades seems like it would placate most. And I disagree that people would cease wanting to see each other. They might *prefer* their new environment, but they would still want to visit each other. Even if Food A tastes better in *every dimension* than Food B, I'll probably want to eat Food B every once in a while.

James:
Considering the fact that the number of possible futures that are horrible beyond imagining is far far greater than the number of even somewhat desirable futures I would be content with a weirdtopia. Weirdtopia is the penumbra of the future light cone of desirable futures.

The fact that this future takes no meaningful steps toward solving suffering strikes me as a far more important Utopia fail than the gender separation thing.

>> 24 comments and no one got the reference yet?

Actually it's the other way round: The beginning of the first episode of the new TV series, especially the hands and the globe, is a reference to your work, Eliezer.

Yes, I got the reference.

It just doesn't seem to be worth commenting on, as it's so tangential to the actual point of the post.

Davis: "That's the most horrifying part of all, though--they won't so choose!"

Why is that horrifying? Life will be DIFFERENT? After a painful but brief transition, everyone will be much happier forever. Including the friends or lovers you were forced to abandon. I'm sorry if I can't bring myself to pity poor Mr. Grass. People from the 12th century would probably pity us too, well, screw them.

The verthandi here sounds just as annoyingly selfless and self-conscious as Belldandy is in the series. Don't these creatures have any hobbies besides doing our dishes and kneeling in submissive positions?

Oh *please*. Two random men are more alike than a random man and a random woman, okay, but seriously, a huge difference that makes it necessary to either rewrite minds to be more alike or separate them? First, anyone who prefers to socialize with the opposite gender (ever met a tomboy?) is going to go "Ew!". Second, I'm pretty sure there are more than two genders (if you want to say genderqueers are lying or mistaken, the burden of proof is on you). Third, neurotypicals can get along with autists just fine (when they, you know, actually try), and this makes the difference between genders look hoo-boy-tiiiiny. Fourth - hey, I *like* diversity! Not just knowing there are happy different minds somewhere in the universe - actually interacting with them. I want to sample ramen subspace every day over a cup of tea. No *way* I want to make people more alike.

Nazgul: I concur. I wonder if Eliezer would press a button activating this future, given the risks of letting things go as they are.

Second, I'm pretty sure there are more than two genders (if you want to say genderqueers are lying or mistaken, the burden of proof is on you).

Indeed. It's not clear from the story what happened to them, not to mention everyone who isn't heterosexual. Maybe they're on a moon somewhere?

Anissimov, I was trying to make the verthandi a bit more complicated a creature than Belldandy - not to mention that Keiichi and Belldandy still manage to have a frustrating relationship along ahem certain dimensions. It's just that "Belldandy" is the generic name for her sort, in the same way that "catgirl" is the generic name for a nonsentient sex object.

But let's have a bit of sympathy for her, please; how would you like to have been created five minutes ago, with no name and roughly generic memories and skills, and then dumped into that situation?

I have to say, although I expected in the abstract that people would disagree with me about Utopia, to find these particular disagreements still feels a bit shocking. I wonder if people are trying too hard to be contrarian - if the same people advocating this Utopia would be just as vehemently criticizing it, if the title of the post had been "Successful Utopia #4-2".

James,

"I have set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair."
I'm not sure whether this would prohibit the attainment or creation of superintelligence (capable of overwhelming the guards), but if not then this doesn't do that much to resolve existential risks. Still, unaging beings would look to the future, and thus there would be plenty of people who remembered the personal effects of an FAI screw-up when it became possible to try again (although it might also lead to overconfidence).

What happened to the programmer, and are there computers around in the new setting? He managed to pull off a controlled superintelligence shutdown after all.

James,

I wonder the same thing. Given that reality is allowed to kill us, it seems that this particular dystopia might be close enough to good. How close to death do you need to be before unleashing the possibly-flawed genie?

You should write SF, Eliezer.

Eliezer, the character here does seem more subtle than Belldandy, but of course you only have so much room to develop it in a short story. I'm not criticizing your portrayal, which I think is fine, I'm just pointing out that such an entity is uniquely annoying by its very nature. I do feel sorry for her, but I would think that the Overmind would create her in a state of emotional serenity, if that were possible. Her anxious emotional state does add to the frantic confusion and paranoia of the whole story.

Though we in the community have discussed the possibility of instantly-created beings for some time, only recently I found out that the idea that God created the world with a false history has a name -- the Omphalos hypothesis. Not sure if you already knew, but others might find it useful as a search term for more thoughts on the topic.

This short story would make a good addition to the fiction section on your personal website.

On rereading:
"Hate me if you wish, for I am the one who wants to do this to you."

This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character.
That then reminded me of how much in-group jargon we use here. Will a paperclipper go foom before we have ems? Are there more than 1000 people that can understand the previous sentence?

Eliezer: I do like being contrarian, but I don't feel like I'm being contrarian in this. You may give too much credit to our gender. I suspect that if I were not already in a happy monogamous relationship, I wouldn't have many reservations about this at all. Your description of the verthandi makes her seem like a strict upgrade from Helen, and Stephen's only objection is that she is _not_ Helen. (Fiction quibble: And couldn't the AI have obscured that?)

For many men, that's still a strict upgrade.

And I'll assume it's also part of Stephen's particular optimization that he only got one. Or else you gave us way too much credit.

Will Pearson: I'm going to skip quickly over the obvious problem that an AI, even much smarter than me, might not necessarily do what you mean rather than what (it thinks) you said. Let's assume that the AI somehow has an interface that allows you to tell exactly what you mean:

"that the AI would keep inside it a predicate Will_Pearson_would_regret_wish (based on what I would regret), and apply that to the universes it envisages while planning"

This is a bit analogous to Eliezer's "regret button" on the directed probability box, except that you always get to press the button. The first problem I see is that you need to define "regret" extremely well (i.e., understand human psychology better than I think is "easy", or even possible, right now), to avoid the possibility that there _aren't_ any futures where you wouldn't regret the wish. (I don't say that's the case, I just say that you need to prove that it's not the case before reasonably making the wish.) This gets even harder with CNR.

If you're not able to do that, you risk the AI "freezing" the world and then spending the life of the Universe trying to find a plan that satisfies the predicate before continuing. (Note that this just requires that finding such a plan be hard enough that the biggest AI physically possible can't find it before it decays; it doesn't have to be impossible or take forever.)

We can't even assume that the AI will be "smart enough" to detect this kind of problem: it might simply be mathematically impossible to anticipate if a solution is possible, and the wish too "imperative" to allow the AI to stop the search.

* * *

In short, I don't really see how a machine inside the universe could simulate even one entire future light-cone of just one observer in the same universe, let alone find one where the observer doesn't regret the act. Depending on what the AI understands by "regret", even not doing anything may be impossible (perhaps it foresees you'll regret asking a silly wish, or something like that).

This doesn't mean that the wish _is_ bad, just that I don't understand its possible consequences well enough to actually make it.

This use of the word 'wants' struck me as a distinction Eliezer would make, rather than this character.
Similarly, it's notable that the AI seems to use exactly the same interpretation of the word "lie" as Eliezer Yudkowsky: that's why it doesn't self-describe as an "Artificial Intelligence" until the verthandi uses the phrase.

Also, at the risk of being redundant: Great story.

Is this a "failed utopia" because human relationships are too sacred to break up, or is it a "failed utopia" because the AI knows what it should really have done but hasn't been programmed to do it?

“This failure mode concerns the possibility that men and women simply weren’t crafted by evolution to make each other maximally happy, so an AI with an incentive to make everyone happy would just create appealing simulacra of the opposite gender for everyone. Here is my favorite part”

- I would not consider this an outright failure mode. I suspect that a majority of people on the planet would prefer this “failure” to their current lives. I also suspect that a very significant portion of people in the UK would prefer it to their current lives.

I think that we will find that as we get into more subtle “FAI Failure modes”, the question as to whether there has been a failure or a success will lose any objective answer. This is because of moral anti-realism and the natural spread of human preferences, beliefs and opinions.

The same argument applies to the “personal fantasy world” failure mode. A lot of people would not count that as a failure.

[crossposted from Accelerating future]

Bogdan, your arguments apply equally well to any planner. Planners have to consider the possible futures and pick the best one (using a form of predicate), and if you give them infinite horizons they may have trouble. Consider a paperclip maximizer: every second it fails to use its full ability to paperclip things in its vicinity, it is losing possible useful paperclipping energy to entropy (solar fusion, etc.). However, if it sits and thinks for a bit, it might discover a way to hop between galaxies with minimal energy. So what decision should it make? Obviously it would want to run some simulations, to see if there are gaps in its knowledge. How detailed a simulation should it make, so it can be sure it has ruled out the galaxy-hopping path?
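
As a toy illustration of that act-now-versus-think-more tradeoff (a hypothetical sketch, glossing over the real difficulty, which is that estimating the probability of a breakthrough itself requires simulation of unknown depth):

def should_deliberate(clip_rate, think_time, p_breakthrough, breakthrough_gain):
    # Paperclips forgone by sitting and thinking instead of clipping right now.
    forgone = clip_rate * think_time
    # Expected extra paperclips if thinking uncovers the galaxy-hopping path.
    expected_gain = p_breakthrough * breakthrough_gain
    return expected_gain > forgone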

I'll admit I was abusing the genie trope somewhat. But then I am sceptical of FOOMing anyway, so when asked to think about genies/utopias, I tend to suspend all disbelief in what can be done.

Oh, and Belldandy is not annoying because she has broken down in tears (perfectly natural), but because she bases her happiness too much on what Stephen Grass thinks of her. A perfect mate for me would tell me straight what was going on, and if I hated her for it (when it was not her fault at all), she'd find someone else, because I'm not worth falling in love with. I'd want someone with standards for me to meet, not unconditional creepy fawning.

Quick poll:

Suppose you had the choice between this "failed" utopia, and a version of earth where 2009 standards of living were maintained "by magic" forever, including old age and death, third world poverty, limited human intelligence, etc.

Who here would prefer "failed utopia 4-2", who would prefer "2009 forever"? Post your vote in the comments.

I wonder if the converse story, Failed Utopia #2-4 of Helen and the boreana, would get the same proportion of comments from women on how that was a perfectly fine world.

I wonder how bad I would actually have to make a Utopia before people stopped trying to defend it.

The number of people who think this scenario seems "good enough" or an "amazing improvement", makes me wonder what would happen if I tried showing off what I consider to be an actual shot at Applied Fun Theory. My suspicion is that people would turn around and criticize it - that what we're really seeing here is contrarianism. But if not - if this world indeed ranks lower in my preference ordering, just because I have better scenarios to compare it to - then what happens if I write the Successful Utopia story?

