
February 05, 2009

Comments

Nick,

There is a tendency for some folks to distinguish between descriptive and normative statements, in the sense of 'one cannot derive an ought from an is' and whatnot. A lot of this comes from hearing about the "naturalistic fallacy" and believing this to mean that naturalism in ethics is dead. Naturalists in turn refer to this line of thinking as the "naturalistic fallacy fallacy", as the strong version of the naturalistic fallacy does not imply that naturalism in ethics is wrong.

As for the fallacy you mention, I disagree that it's a fallacy. It makes more sense to me to take "I value x" and "I act as though I value x" to be equivalent when one is being honest, and to take both of those as different from (an objective statement of) "x is good for me". This analysis of course only counts if one believes in akrasia - I'm really still on the fence on that one, though I lean heavily towards Aristotle.

So, what about the fact that all of humanity now knows about the supernova weapon? How is it going to survive the next few months?

Reading the comments, I find that I feel more appreciation for the values of the Superhappies than I do for the values of some OB readers.

This probably mostly indicates that Eliezer's aliens aren't all that terribly alien, I suppose.

@Wei:
It's just another A-Bomb, only bigger. By now, they must have some kind of policy that limits problems from A-Bombs and whatever other destructive thingies they have. On the other hand, the damage from blowing up Sol is even more catastrophic than just blowing up any world: it shatters humanity, with no prospect of reunion.

Nick, note that he treats the pebblesorters in parallel with the humans. The pebblesorters' values lead them to seek primeness and Eliezer optimistically supposes that human values lead humans to seek an analogous rightness.

What Eliezer is trying to say in that post, I think, is that he would not consider it right to eat babies even conditional on humanity being changed by the babyeaters to have their values.

But the choice to seek rightness instead of rightness' depends on humans having values that lead to rightness instead of rightness'.

Simon: Well, the understanding I got from all this was that human development would be sufficiently tweaked so that the "Babies" that humans would end up eating would not actually be conscious, nor ever have been. Non-conscious entities don't really seem to be tied to any of my terminal values, near as I can tell.

Of course, if the alteration was going to lead to us eating conscious babies, that's a whole other thing, and if that was the case, I'd say "blow up Huygens twice as hard, then blow it up again just to be sure."

However, this seems unlikely, given that the whole point of that part of the deal was to give _something_ to the babyeaters in return for tweaking them so that they eat their babies before they're conscious or whatever. The whole thing would end up more or less completely pointless from (near as I can tell) even a SuperHappy point of view if they simply exchanged us for the babyeaters. That would just be silly... in a really horribly disturbing way.

I agree with Tarleton, I think. Can someone briefly summarize what is so objectionable about the Superhappy compromise? It seems like a great solution in my view. What of importance is humanity actually giving up? They have to eat non-sentient children. It's hard to see why we should care about that when we will never once feel a gag reflex and no pain is caused to anyone. Art and science will advance, not retreat, with Superhappy technology applied to them. The sex will be better, and there will be other cool new emotions which will have positive value to us. The solution is not sphexish in a Singularity-fun sense, and immediately after modification any lingering doubts won't exist. I must be missing some additional aspect of life that people think will get lost? I would not be at all surprised if humanity's CEV makes them essentially the Superhappy people.

Dan: Obviously part 8 is the 'Weirdtopia' ending!

(I mean, we've had utopia, dystopia, and thus by Eliezer's previous scheme we are due for a weirdtopia ending.)

This lurker has objections to being made to eat his own children and being stripped of pain: the SH plan is not a compromise but an order. From a position of authority, they can make us agree to anything by debate or subterfuge or outright exercise of power; the mere fact that they seem so nice and reasonable changes nothing about their intentions, which we do not know and cannot trust. How do we know that the SH ship's crew are true representatives of the rest of their race? Why do they seemingly trust/accept Akon as the representative of the Entire Human Race? I think the attitude of Niven's ARM Paranoids is proper here ("Madness Has Its Place").

As an aside, I am glad that I have read the Motie books, and even more glad that I happened to start watching Fate/Stay Night last week. To be this entertained, I would have to be my teenaged self reading The Fountainhead and The Mote in God's Eye for the first time and simultaneously. Thank you, Eliezer, for making me mull alternative ethics and lol simultaneously for the first time.

Don't expand this into a novel; it was superb, but I'd rather see a wider variety of short works exploring many related themes.

Perhaps this is just me not buying the plot justifications that set up the strategic scenario, but I would be inclined to accept the SuperHappy deal because of a concern that the next species that comes along might have high technology and not be so friendly. I want the defense of the increased level of technology, stat. Sure, it involves giving up some humanity, but better than giving up all of humanity. Once I find that there are two alien species with star travel, I get really, really worried about the 3rd, 4th, etc. Maybe one of them comes from a world w/o SIAI, w/o Friendly AI, and it is trying to paperclip the universe, doubling even faster than the SuperHappies because it doesn't stop for sex (it has rewritten its utility function so paperclipping and acts that facilitate maximal speed of paperclipping are sex).

I would accept the changes to human nature implied by the SuperHappy deal to prevent being paperclipped.

I agree that this section of the story feels a bit rushed, but maybe that is the intention.

I don't really like how easily these people in high positions of authority are folding under the pressure. The President in particular was taken out with what was to me very little provocation.

Plus, I just can't relate to a human race that is suicidally attached to preserving its pain and hardships. The offer made by the Superhappies is just not that bad.

Eliezer tries to derive his morality from stated human values.

In theory, Eliezer's morality (at least CEV) is insensitive to errors along these lines, but when Eliezer claims "it all adds up to normality," he's making a claim that is sensitive to such an error.

I agree that deriving morality from stated human values is MUCH more ethically questionable than deriving it from human values, stated or not, and suggest that it is also more likely to converge. This creates a probable difficulty for CEV.

It seems to me that if it's worth destroying Huygens to stop the Superhappies it's plausibly worth destroying Earth instead to fragment humanity so that some branch experiences an infinite future so long as fragmentation frequency exceeds first contact frequency. Without mankind fragmented, the normal ending seems inevitable with some future alien race. Shut-up-and-multiply logic returns error messages with infinite possible utilities, as Peter has formally shown, and in this case it's not even clear what should be multiplied.

Psy-Kosh: I was using the example of pure baby eater values and conscious babies to illustrate the post Nick Tarleton linked to rather than apply it to this one.

Michael: if it's "inevitable" that they will encounter aliens then it's inevitable that each fragment will in turn encounter aliens, unless they do some ongoing pre-emptive fragmentation, no? But even then, if exponential growth is the norm among even some alien species (which one would expect) the universe should eventually become saturated with civilizations. In the long run, the only escape is opening every possible line from a chosen star and blowing up all the stars at the other ends of the lines.

Hmm. I guess that's an argument in favour of cooperating with the superhappies. Though I wonder if they would still want to adopt babyeater values if the babyeaters were cut off, and if the ship would be capable of doing that against babyeater resistance.

It's interesting to note that those oh-so-advanced humans prefer to save children to saving adults, even though there don't seem to be any limits to natural lifespan anymore.
At our current tech-level this kind of thing can make sense because adults have less lifespan left; but without limits on natural lifespan (or neural degradation because of advanced age) older humans have, on average, had more resources invested into their development - and as such should on average be more knowledgeable, more productive and more interesting people.
It appears to me that the decision to save human children in favor of adults is a result of executing obsolete adaptations as opposed to shutting up and multiplying. I'm surprised nobody seems to have mentioned this yet - am I missing something obvious?

Sebastian,

Here there is an ambiguity between 'bias' and 'value' that is probably not going to go away. EY seems to think that bias should be eliminated but values should be kept. That might be most of the distinction between the two.

Are bodily pain and embarrassment really that important? I'm rather fond of romantic troubles, but that seems like the sort of thing that could be negotiated with the superhappies by comparing it to their empathic pain. It also seems like the sort of thing that could just be routed around, by removing our capacity to fall out of love and our preference for monogamy and heterosexuality.

The problem with much of the analysis is that the culture already has mutated enough to allow for forcible rape to become normative.

I'm not sure that the Superhappy changes as to "romantic troubles" are much more change than that.

Humanity is doomed in this scenario. The Lotuseaters are smarter and the gap is widening. There's no chance humans can militarily defeat them now or at any point in the future. As galactic colonization continues exponentially, eventually they will meet again, perhaps in the far future - but the Lotusfolk will be even stronger relatively at that point. The only way humans can compete is by developing an even faster strong AI, which carries a large chance of ending humanity on its own.
So the choices are:
-accept Lotusfolk offer now
-blow up the starline, continue expanding as normal, delay the inevitable
-blow up the starline, gamble on strong AI, hopefully powering-up human civ to the point it can destroy the Lotusfolk when they meet again

This choice set is based on the assumption that the decider values humanity for its own sake. I value raw intelligence, the chassis notwithstanding. So the only way I would not choose option 1 is if I thought that the Lotusfolk, while smarter currently, were disinclined to develop strong AI and go exponential, and thus with humanity under their dominion, no one would. If humans could be coaxed into building strong AI in order to counter the looming threat of Lotusfolk assimilation, and thus create something smarter than any of the 3 species combined, then I would choose option 3.

This was an interesting story, though I wonder if the human capitulation either option offers is the only option. Bluntly, the Superhappies don't strike me as being that tough. Even if their technology is higher and their development orders of magnitude faster than ours, they are completely unwilling to accept suffering, even if it comes through their own sense of empathy. All humans have to do is offer a credible threat of Superhappy suffering and convince them to modify themselves not to care about our suffering. I.e., "We will resist you every step of the way, thus maximizing our suffering, plus you cannot be 100% sure you'll be able to convert us without us inflicting at least some harm."

Hm, I think the spam guard ate my last comment, so I'll repeat:

I don't think the SH are really up to converting an unwilling humanity. Despite all their superiority, they are fundamentally unwilling to be inconvenienced, so humans only have to successfully argue their case by pointing out the probable mass suicides depicted in the alternate ending and the casualties SH society might take. Since they are almost completely risk-averse, even the possibility of losing a single ship might be enough to scare them off.

It's a bit like the world being unwilling to intervene in North Korea despite the overwhelming advantage: it's just not worth a single life lost to us.

Given the SH's willingness to self-modify, it would be easier to convince them to ratchet down their empathy for us to tolerable levels.

Bugger there's my original comment after all. Whoops.

