
February 03, 2009


Within the confines of the story:
* No star reached by starline has ever been seen from another, which implies a vastly larger universe than can be seen from a given lightcone. Basically, this grants the slightly cryptic assumption that travel between stars without starlines is impossible.
* The weapon is truly effective: works as advertised.

Any disagreement with that would have to explain why the " 'Assume there is no god, then...' / 'But there _is_ a god!' " fallacy doesn't apply here.

The threat of a nova feels like a more interesting avenue than the mere detonation.

You guys are very trusting of super-advanced species who already showed a strong willingness to manipulate humanity with superstimulus and pornographic advertising.

I'm not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappys actually do and detonate the star if they start heading for the wrong portal. If that is not the case (which depends on the mechanics of the Alderson drive) then either detonate the local star immediately, or the star one hop back.

Wanna see incredibly intelligent people wasting time on absurd meaningless questions? Come here to Overcoming Bias! A stupid person will believe in any old junk, but it takes someone very intelligent to specifically believe in such elaborate nonsense.

One can only wonder what that might imply about those wise folk who have recognised all of this as nonsense, yet continue to read and even respond to it.


I'm not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappys actually do and detonate the star if they start heading for the wrong portal.

Once you know someone has technologies vastly ahead of your own, you might as well assume they can do your worst nightmares - because your imagination and assumptions are unlikely to present limits to their capabilities.

Imagine a group of humans circa 800 A.D. making assumptions about how they will be tracked down by a team of modern day soldiers with advanced communications, GPS, satellite imagery, airborne drones, camouflaged clothing, accurate weapons, poison gas, ... and those soldiers aren't even biologically or intellectually more advanced.

If I were the humans, I'd report back to earth (they have valuable information), then send out a robotic probe through the Alderson drive and blow up the star.

The humans in this story know that there are at least two alien cultures, and the culture shock from them is too much to deal with. If there are more cultures, it will be worse.

Another possibility would be to blow up Earth's sun. This fragments the human species, but increases the probability that some branches of humanity will survive.
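The survival argument here is the usual independence calculation: if each isolated branch survives with some probability, the chance that at least one survives grows quickly with the number of branches. A quick sketch, with purely illustrative numbers (the 0.5 per-branch probability is my assumption, not anything from the story):

```python
# Illustrative only: per-branch survival probability and branch count are assumed.
def any_branch_survives(p_branch: float, n_branches: int) -> float:
    """Probability that at least one of n independent branches survives."""
    return 1 - (1 - p_branch) ** n_branches

single = any_branch_survives(0.5, 1)  # one unified civilization: 0.5
split = any_branch_survives(0.5, 4)   # four isolated branches: 0.9375
assert split > single
```

Of course this assumes the branches' fates are independent, which is exactly what destroying the connecting starlines is meant to buy.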

Oh boy,

I do not care if anyone creates art, but I do care if sentient beings are hurt.
The Babyeater way of living is basically a socially accepted Gulag, only worse.
And evidently the Happies see humanity the same way.

Now, what I also don't like is collectivism. Even the super happies seem rather single-minded, and pretty willing to make decisions for their whole species.

Now, despite not fully understanding super happy ethics, and not trying to break the story, my proposal would be:
The superhappies offer the living Babyeaters the change, and nevertheless rescue each and every baby from being eaten. These kids then get the choice to return home at any later time (no idea if they would be accepted) or live with the happies, while also being offered treatment/change for their condition.

[Readers should be aware that with some searching it would be possible to find human cultures with similar ethics in the past. Think samurai, or holy warriors.]

The same solution also works perfectly for the humans. Offer treatment, protect the kids.
The happies might be able to accept pain that lasts only seconds, but will prevent any form of child abuse.

Now, that sounds like an awful lot of work, but I think the happies might be able to pull it off, and of course it's the only ethical thing to do that I can think of.

The alternative of killing sentient beings is cruel, no matter what.


The deal Akon reached with the super happies is so preposterously one-sided that it is no surprise at all the babyeaters did not agree to it, and that could have been foreseen. For either the humans or the babyeaters to even consider destroying their identity so the super happies will make art and jokes is absurd. For people, at least, self-identity is vastly more important than overall utility. Super happy art and jokes are worth basically nothing to the babyeaters and humans. If the super happies want humans to switch off physical pain, embarrassment, etc., they should agree to: 1. unconditional sharing of every technological advancement they make; 2. allowing individual adult humans the option of turning pain etc. back on; 3. doing our baby eating for us. But that's just a suggestion; the main problem is that the chance of the super happies nailing the fairest possible deal on their first guess is astoundingly small. Even with complete knowledge of human and babyeater culture, their knowledge is phenomenologically inadequate for coming up with a deal that is actually fair to all. Not negotiating was irrational, as was failing to contact the babyeaters to get their thoughts on the deal before agreeing to it: three-party deals require three-party negotiations.

That the Confessor didn't step in sooner... is kind of ruining the story for me. I'm not sure if these issues were brushed aside to make your point or if you really don't understand how absurd this deal is.

Stop the superhappies' ship before it jumps out! They must not learn of humanity's existence. Use the Alderson drive if necessary.

First, with regards to the solution proposed by the superhappies, my thought would have been, right at the start, this:

Accept _IF_ they can ensure the following: for us, the change away from pain doesn't end up having indirect effects that, well, more or less screw up other aspects of our development; i.e., one of the primary reasons why humanity might have been very cautious in the first place with regard to such changes.

With regards to the business of us changing to more resemble babyeaters, can they simultaneously ensure that the eaten children will not have, at any point, been conscious? And can they ensure that the delayed consciousness (not merely self awareness, but consciousness, period) doesn't negatively impact, in other ways, human development?

Further, can they ensure that making us, well, babyeater like does _NOT_ otherwise screw with our sympathy and compassion?

_IF_ all of the above can be truly answered "yes", then (in my view) the price that humanity would pay would not really be all that bad.

Of course, we then have to ask about the changes to the babyeaters. Presumably, the ideal would be something like "delay onset of consciousness until after the culling (and not at all, of course, for those that are eaten)", but in such a way that intelligence and learning are still there, and when the babyeater becomes conscious, it can integrate data and experience acquired while it was not conscious.

But, a question arises, a possibly very important one: Should the Superhappies firing on the Babyeater ship be considered evidence that Superhappies are Prisoner's Dilemma _defectors_?

If yes, then how much can we trust the Superhappies to actually implement the solution they proposed, rather than do something entirely different? And _THAT_ consideration would be perhaps the only one (I can think of so far) for really considering the "blow up a star to close down the paths leading to humanity's worlds" option (post Babyeater fix, perhaps).

If the humans know how to find the babyeaters' star,
and if the babyeater civilization can be destroyed by blowing up one star,

then I would like to suggest that they kill off the babyeaters.

Not for the sake of the babyeaters (I consider the proposed modifications to them better than annihilation from humanity's perspective)
but to prevent the super-happies from making even watered down modifications adding baby-eater values -
not so much to humans, since this can also be (at least temporarily) prevented by destroying Huygens -
but to themselves, as they are going to be the dominant life form in the universe over time, being the fastest growing and advancing species.

Of course, relative to destroying Huygens the price to pay in terms of modifications to human values is high, so I would not make this decision lightly.

Is this story self-consistent? Consider that:

(i) it's easy to make stars go nova.

(ii) when a star goes nova, its Alderson lines disappear, disconnecting parts of the network from each other, and stopping a war if the different sides are now in different parts of it (the fact that the network is sparse is important here)

(iii) both Babyeaters and the Superhappies know this

(iv) nevertheless the Superhappies still plan to prosecute a war against the babyeaters
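Point (ii) is just graph connectivity: in a sparse network, destroying a single well-placed node can sever the only path between two regions. A toy sketch of that idea (the star names and topology here are invented for illustration, not taken from the story):

```python
from collections import deque

# Hypothetical sparse starline network: stars are nodes, starlines are edges.
network = {
    "Earth-side": {"Huygens"},
    "Huygens": {"Earth-side", "Nova-star"},
    "Nova-star": {"Huygens", "Babyeater-side"},
    "Babyeater-side": {"Nova-star"},
}

def reachable(graph, start, removed=frozenset()):
    """Breadth-first search over the network, skipping destroyed stars."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node] - removed - seen:
            seen.add(nbr)
            queue.append(nbr)
    return seen

# With the network intact, both sides are connected...
assert "Babyeater-side" in reachable(network, "Earth-side")
# ...but destroying one star severs the only path between them.
assert "Babyeater-side" not in reachable(network, "Earth-side", removed={"Nova-star"})
```

The sparser the network, the more likely a single nova is a cut vertex, which is why (i)-(iii) together make the Superhappies' war plan in (iv) look inconsistent.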

Well. I guess that stunning the Pilot is a reasonable thing to do, since he is obviously starting to act anti-socially. That is not the point though. Two things strike me as a bit silly, if not outright irrational.

First is about the babyeaters. Pain is relative. In the case of higher creatures on Earth, we define pain as a stimulus signaling the brain of some damage to the body. Biologically, pain is not all that different from other stimuli, such as cold or heat or just tactile feedback. The main difference seems to be that we humans, most of the time, experience pain in a highly negative way. And that is the only point of reference we know, so when humans say that babyeater babies are dying in agony, they are making some unwarranted assumptions about the way the babies perceive the world. After all, they are structurally VERY different from humans.

Second is about the "help" humans are considering for babyeaters and superhappies are considering for both humans and babyeaters. Basically by changing the babyeaters to not eat babies or to eat unconscious babies, their culture, as it is, is being destroyed. Whatever the result, the resulting species are not babyeaters and babyeaters are therefore dead. So, however you want to put it, it is a genocide. Same goes for humans modified to never feel pain and eat hundreds of dumb children. Whatever those resulting creatures are, they are no longer human either biologically, psychologically or culturally and humans, as a race, are effectively dead.

The problem seems to be that humans are not willing to accept any solution that doesn't lead to the most efficient and speedy stoppage of baby eating. That is, any solution where babyeaters will continue to eat babies for any period of time is considered inferior to any solution where babyeaters will stop right away. And the only reason for this is that humans feel discomfort at the thought of what they perceive as the suffering of babies. In that respect humans are no better than the superhappies: they would rather genocide the whole race than allow themselves to feel bad about that race's behavior. If humans (and hopefully superhappies) stop being such prudes and allow other races the right to make their own mistakes, a sample solution might lie in making the best possible effort to teach the babyeaters human language and human moral philosophy, so that they might understand the human view on the value of individual consciousness and on individual suffering, and make their own decision to stop eating babies by whatever means they deem appropriate. Or argue that their way is superior for their race, but this time with full information.

... but relative to simply cooperating, it seems a clear win. Unless the superhappies have thought of it and planned a response.

Of course, the corollary for the real world would seem to be: those people who think that most people would not converge if "extrapolated" by Eliezer's CEV ought to exterminate other people who they disagree with on moral questions before the AI is strong enough to stop them, if Eliezer has not programmed the AI to do something to punish that sort of thing.

Hmm. That doesn't seem so intuitively nice. I wonder if it's just a quantitative difference between the scenarios (eg quantity of moral divergence), or a qualitative one (eg. the babykillers are bad enough to justifiably be killed in the first place).

Let's make a bit of summary.

Similarities: Each species considers suffering, in general, negative utility. Each species considers survival very high in utility. (Though at least some humans consider the possibility of sacrificing their species for the others' benefit, so this is not necessarily highest in value.) Each species has a kind of “fun” that's compatible with the others', and that's high in utility. They are all made of individuals, reproduce sexually, can communicate among themselves and at least somewhat compatibly with the others.

* crystal pogo-sticks:
- this appears to indicate that they have some equivalent of empathy for other species
- have other "compatible pleasures" with humans, e.g. living & eating, reproduction, and art;
- but consider suffering of winnowed children acceptable (indeed, good) because it is useful for the existence and evolution of their species (the main selective pressure); so the existence and evolution of their species is considered to have massive positive utility. The relationship appears hard-wired in their thinking processes due to natural evolution (because that's _how_ evolution worked for them).
- avoid their suffering, and that of other species'
- this is not conditioned on the other species' eating of their children: they tried to “help” humans adopt children-eating although humans don't already do it; therefore, they assign positive utility to other species' utility _independently_ of whether or not they eat their children. Also, they didn't instantly kill the humans, even though they could have at the start.
- appear to be very good team players as a species, even hard-wired for that. In fact, this appears to be the _top_ of their value pyramid.

* noisy bipeds:
- enjoy various pleasures, like living & eating, reproduction, and art and humor;
- avoid their own suffering, and that of others (empathy); this is hard-wired into their brains, as a survival mechanism. But they consider low-level suffering (of children and adults) acceptable (indeed, good) because: it is useful for the existence of their species (learning to avoid things with unpleasant consequences); natural evolution hard-wired the bipeds' brains to _like_ the results of suffering (this goes as far as valuing something obtained effortfully more than the same thing obtained effortlessly); in the ancestral environment, many useful things could not be obtained without some suffering, so a complex system of trade-offs evolved in the brain.
- much of their team-playing is rational: they have instincts to cheat, and those are rationally countered if an unpleasant outcome is anticipated (though anticipation is also influenced by cooperative instincts; the rational part has at least some part in balancing them).

* happy tentacly lumps:
- avoid suffering; no explicit indication why, presumably evolved as in the other two species.
- have empathy; this might be evolved or engineered, not clear; but it's not an absolute value, if we trust their statement that they're willing to alter it if it causes them unavoidable suffering.
- don't seem to assign any value to suffering, however.
- like happiness a lot, but this doesn't seem to be the absolute core value: they've not short-circuited their pleasure centers. So there must be something higher: experiencing the Universe? Liking happiness was probably originally evolved (it's a mechanism of evolution), but might have been tampered with then.
- they seem rational team-players, too: it promises more future happiness rather than less future suffering.

* * *

I'm a bit less versed in the Prisoner's Dilemma than I suspect most here are, so I'll summarize what I understand. There's supposed to be, for each “player”, the best personal outcome (everyone else cooperates, you cheat), the worst personal outcome (you cooperate, everyone else cheats) and the global compromise (everyone cooperates, nobody gets the bad outcome). I suppose with more than two players there are all sorts of combinations (two ally and cooperate, but collectively cheat against the other); I'm not sure how relevant that is here, we'll see. In real situations there are also more than two options, even with just two players (like the ultimatum game, you may "cheat" more or less). There's also another difference between the game and reality: in real life you may not really know the utility of each outcome (either because you mis-anticipated the consequences of each option, or because you don't know what you really want; I'm not sure if these two mean the same thing or not).
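For reference, the standard two-player setup can be sketched with the textbook payoff ordering T > R > P > S; the concrete numbers below are conventional illustrations, not anything derived from the story:

```python
# Conventional Prisoner's Dilemma payoffs (row player's payoff listed first).
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # global compromise (R, R)
    ("cooperate", "defect"):    (0, 5),  # worst personal outcome (S) vs best (T)
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection (P, P)
}

def payoff(a: str, b: str) -> tuple:
    return PAYOFFS[(a, b)]

# Defecting dominates for each player taken individually...
assert payoff("defect", "cooperate")[0] > payoff("cooperate", "cooperate")[0]
assert payoff("defect", "defect")[0] > payoff("cooperate", "defect")[0]
# ...yet mutual cooperation beats mutual defection for both.
assert payoff("cooperate", "cooperate") > payoff("defect", "defect")
```

The three-species situation in the story adds exactly the complications noted above: more players, graded rather than binary "cheating", and uncertainty about each species' true utilities.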

Let's see the extreme options. “+” means what each species considers the best outcome and “-” means what it considers worst if the other species defect (as far as I can tell).
* crystal pogo-sticks:
+ everyone starts having a hundred children and eating them just before puberty.
- they are forced to keep living and multiplying, but prevented from eating their children; they don't even _want_ to eat them, the horror!
- same as above, but they're also happy about it and everything else.

* noisy bipeds:
+ they keep living and evolving as they do now; the crystal pogo-sticks stop eating self-aware children and are happy about it; and the happy tentacly lumps keep being happy and help everyone else be as happy as they want; either they start liking “useful” suffering or they stop empathizing with the suffering of people who do want it.
- everyone starts having a hundred children and eating them just before puberty.
- everyone stops suffering and tries to be as happy as possible, having sex all the time. The current definition of “humanity” no longer applies to anything in the observable Universe.

* happy tentacly lumps:
+ everyone stops suffering and tries to be as happy as possible, having sex all the time. Horrible things like the current “humanity” and “baby-eaters” no longer exist in the observable Universe :))
- everyone starts having a hundred children and eating them just before puberty.
- humans keep suffering as much as they want, and keep living and evolving as they do now; the crystal pogo-sticks stop eating self-aware children and are happy about it, but may keep as much suffering as the humans believe acceptable; and they themselves keep being happy, help everyone else be as happy as they want, and start liking “useful” suffering.

This doesn't mean necessarily that each outcome is actually possible. As far as I can tell from the story, only the happy tentacles can actually cheat that way. The worst that humans _can_ do from the tentacles' POV is start a Dispersion: run back and start jumping randomly between stars, destroying the first few stars after jumping. Depending on who they want to screw most, they may also destroy the meeting point, and/or send warning and/or advice to the crystal pogo-sticks. I think the pogo-sticks can do the same (it appears from the story that the star-destroying option is obvious, so they could start a Dispersion, too). This wouldn't prevent problems forever, but it would at least give the Dispersed time to find other options.

The “compromise” proposed by the happy tentacly lumps doesn't seem much worse than their best option, though: the only difference I can see is that everyone starts eating unconscious children. (I don't see why they wouldn't try humor and more complex pleasures anyway: they haven't turned themselves into orgasmium, so they presumably want to experience pleasurable _things_, not pleasure itself.) I don't understand crystalline psychology well enough, but it seems pretty close to the worst-case scenario for them. And it's actually a bit worse than the worst-case tentacle-defecting scenario for the humans.

The tentacly lumps may think fast, but it seems to me that either they don't think much better, or they're conning everyone else. They're in quite a hurry to act, which is a bit suspicious:

OK, it's reasonable that they're concerned about the crystalline children. But they also know that the other species have trouble thinking as fast as them, and there's another option that I'm surprised nobody mentioned:

As long as everyone cooperates, everyone can just agree to _temporarily_ stop doing whatever the others find unacceptable, and use the time to find more solutions, or at any case understand the solution others propose. They may find each other “varelse” and start a war, but I see no reason for any species to do it _right_now_, even if they know they'd win it. (This assumes they all cooperate in the Prisoner's Dilemma as a matter of principle, of course.)

* * *

While the crystal pogo-sticks and the noisy bipeds won't much enjoy putting a temporary stop to having children (say, a year or a decade, even a century), I don't see why having the “happy tentacly compromise” _right_now_ would be higher in their preference ordering, since apparently nobody ages significantly. Even a _temporary_ stop to disliking not having children doesn't seem a problem (none of the three species seem inclined to reproduce unlimitedly, so they must have some sort of reproductive controls already, beyond the natural ones). The happy tentacly lumps are carefully designed in the story to have no unwanted attributes themselves, except that they want to, and can, transform the other species against their will. The humans (and I) seem to consider their private habits, as far as they shared them, merely a bit boring relative to others, and the crystalline pogo-sticks seem to consider not eating children dislikable but acceptable in other species, at least temporarily (since they didn't attack anyone). So the only compromise they'd have to make is to _temporarily_ stop empathizing with small amounts of suffering (i.e., that of the other species not having children during the debate) and not forcibly convert them until afterwards.

As far as I can tell after a day of thinking, the result of the debate would include the crystalline pogo-sticks understanding that not eating children and cooperating are compatible in other species (they do have the concept of “mistaken hypothesis”, and they just got a lot more data than they had before; also they didn't instantly attack a species that never eats its children), and also accept some way of continuing their way of life without eating _conscious_ children. Depending on the reproduction (& death, if applicable) rates of each species, and their flexibility, it might even be technically possible to let them reproduce normally, but modify their children such that they don't suffer during the winnowing, and the eaten ones become a separate non-reproducing species voluntarily.

As for the humans, from my reading of the story I understand that the happy tentacly lumps mostly object to _involuntary_ human suffering, i.e. the children's. They don't like the voluntary suffering, but it doesn't seem to me they'd force the issue on adults. So they should at least accept letting the existing adults decide if they want to keep their suffering, such as it is. I don't find unacceptable a compromise where children get to grow up without any suffering they don't want, especially (but not necessarily) if the growing up is engineered so that the final effect is essentially the same (i.e., they become “normal” humans and accept suffering in “usual” circumstances, even if they didn't grow up with it). Of course, we're psychologically closer to the Confessor than to the rest of the humans in the story, so what we consider acceptable is as irrelevant as his view to what decision they'd take.

The happy tentacly lumps might have simply anticipated all this, and decided on the best outcome they want. (In case they're really _really_ smart and practically managed to simulate the other species.) This would explain why they didn't propose the above, but would make the story moot. In that case the situation is somewhat analogous to an AI in a box, except that you can't destroy the box nor the AI inside; you can only decide to keep it there. My decision there would be to put as big a pile of locks as I can on the box, and hope the AI can't eventually get out by itself. The analogue of which would be Dispersion. (But the analogy is not an isomorphism: the AI is in an open box right now, and it doesn't seem to be trying to jump out, i.e. it didn't blow up the human ship yet, which is why the story is still interesting.)

Go back to Earth and detonate. It will mean the end of the civilization they know, but the Superhappys will still hunt the survivors down with 2^2^2^2 ships, and will force an equitable compromise on each surviving pocket of humanity, each of which will keep the whole more human than it would be with just one compromise with humanity.

I just can't figure out who the Confessor will shoot, or if he will just threaten, to make it happen. And I want to read both endings.

If the superhappies are more advanced than us, then shouldn't they know the true value of the strong nuclear force, and thus know that blowing up the star is an option?

The Superhappies' decision seems reasonable. I am not sure what alternative solution might be. Hrm.

Dmitry, concerning genocide, I believe you are anthropomorphizing a culture. "Babyeater culture" is not a person. Eliminating the culture is not a crime if performed by non-murderous means; consider an alternative "final solution" of using rational arguments and financial incentives to convince Jews to discard Judaism.

Perhaps the act of forcible biological modification to prevent criminal behavior is wrong (e.g. chemical castration for child molesters), but it isn't the same as a murder.

What is giving some people the impression that saying "no" was an option? I mean, they could have turned down the compromise, but unless they had something to offer right then, that would have meant instant death (and the compromise would be implemented anyway). "Yes" means the humans are not defecting right now; "no" is (pointlessly) suicidal.

Chris, I don't think I am wrong in this. To give an analogy (and yes, I might be anthropomorphizing, but I still think I am right), if someone gives me a lobotomy, I, Dmitriy Kropivnitskiy, will no longer exist, so effectively it would be murder. If Jews are forced to give up Judaism and move out of Israel, there will no longer be Jews as we know them or as they perceive themselves, so effectively this would be genocide.

I am not certain I understand the terms of the puzzle. Should the audience come up with a better ending, a more plausible ending, or an ending which works better as story? And if we fail at this task, will we still get to know the other ending you had in mind?

Humanity could always offer to sacrifice itself. Compare the world where humanity compromises with both the Babyeaters and the Super Happy, versus one where we convince them to not compromise and instead make everybody Super Happy.

Of course, I'm just guessing, since I'm not a Utilitarian.

The Super Happies hate pain, and seeing others in pain causes them to experience pain. Humans tolerate pain better than the Super Happies do. This gives the humans a weapon to use against them, or at least negotiating leverage. They can threaten to hurt themselves unless the Super Happies give them a better deal.

(So, in order to unlock the True Ending, do we have to come up with a way for the humans to "win" and get what they want, alien utility functions be damned, or should we take the aliens' preferences into account too?)

(Long time lurker - first post)
The course I would suggest, if on the IPW, would be to rally the Human fleet to set up a redundant and tamper-resistant self-destruct system on the newly-discovered star - with a similar system set up at the Human colony one jump further back.

When the Super-Happys return, we would give them the option:
1. Altering their preferences to align with Human values, at least enough so that they would no longer consider changing Humans without their full consent.
2. Immediately detonating the star - so they would no longer be able to rescue the Baby-Eater's Babies.

Any other course of action, or attempting to tamper with the self-destruct would trigger the self-destruct (and perhaps that on the next Human Colony in case they prevented the first nova).

We would offer volunteers to join the Super-Happys, in order to explore the feasibility and desirability of further harmonization. (and also monitor their compliance with the agreement... and steal as much technology as possible).

I say this as an unabashed defender of the superiority of Human values, who is willing to use our native endowment of vicious craftiness to defend and promote those ideals.

Akon has clearly lost his mind, so the Confessor should anesthetize him. He does not need to break his oath and take command of the ship. Instead he can just point out some obvious things to the rest. Such as that it would be crazy to blackmail the Superhappies using a single ship with no communication to the rest of humanity. Or that interest rates need not fall through the floor the way Akon was trying to convince them, but instead would rise by a similar amount. Or what Cabalamat pointed out. I am only not sure what ending this leads to.

This was a failed negotiation. The fact that the babyeaters rejected the superhappy proposal means it is not symmetric. It is not a compromise that fair babyeaters would propose if they were in the superior position.

That the superhappies proposed it and then ignored evidence that it was unacceptable, is evidence that the superhappies are not being as fair as Akon seemed to think they were. It is obvious that they are not sacrificing their value system as much as they are requiring the babyeaters to. They are pushing their own values on the babyeaters because they CAN, not because they are offering a balanced utility exchange. They are likely doing the same to us.

They view the babyeater situation as dire enough that they are willing to enact modifications without acceptance. They gave humankind a general proposal that they predicted humankind would accept. They COULD just make modifications, but part of their value system includes getting human acceptance.

I'm not sure, but I think the humans should threaten, or go to war with, them, so that they make no more modifications except those they think they MUST make. That'll be my guess. Stun the captain, go to war.

I'm not sure what the babyeater's current stance says about how much they've considered the possibility that they will encounter superpowered babyeaters in the future.

Dmitry, if someone destroys your brain or alters it enough so that it is effectively the brain of a different person, that is indeed murder. Your future utility is lost, and this is bad. Forcing you to behave differently is not murder. It may be a crime (slavery) or it may not be (forcing you to not eat your children), but it is not murder.

Genocide (as I understand the term) is murder with the goal of eliminating an identifiable group. It is horrific because of the murder, not because the identifiable characteristics of the group disappear.

My understanding is that preventing babyeating will be done in such a way as to minimize harm done to adult babyeaters, and only if such harm is outweighed by the utility of saving babyeater children. It is vastly different than genocide; the goal is to prevent as much killing as possible, not eliminate the babyeating aliens.

Incidentally, my hypothetical "final solution" is actually a Pareto improvement: every Jew who converts does so because it increases his/her utility.

I would guess that the True Ending involves the Confessor stunning Akon. The aliens used every trick in the book to influence the humans. They communicated using real-time video instead of text transmissions. They gave speeches perfectly suited to tug on people's emotional levers. Since the Superhappies run at an accelerated rate, this also forced Akon to respond before he could fully process information.

I would almost say Akon's mind has been hacked. Akon had very little time to think before accepting the Superhappy terms and he currently seems resigned to the destruction of humanity. He uses "negotiations" to describe the Superhappy ultimatum. Anyway, he's probably not fit to lead the ship. The Pilot hasn't had a mental breakdown, he's just (understandably) outraged at what's going on. If the stunner is only used in the case of mental breakdown, the Pilot will have to be stopped by other means. Once a new leader is elected/promoted/whatever, the Confessor should require all real-time communication from the Superhappies to be text-only.

The Superhappies may be technologically superior, but their weakness is the fact that they don't separate genes from memories. They also don't withhold information from each other. This could allow a specially-crafted memory to disrupt or destroy the entire race. Even the kiritsugu are shocked by the slightest display of suffering, so it's not much of a stretch to say some images exist that would permanently traumatize all Superhappies.

Of course, destruction isn't the goal, modifying is. Before the Superhappies leave, the humans should ask to stay in contact with one Superhappy ship during Operation Babyeater. By studying them more, the humans could find a way to insert a memory that changes Superhappies to be less of a threat. If the humans have the upper hand, they can actually decide whether or not to adopt superhappiness instead of having the choice forced on them.

If it doesn't work, at least the humans will know how big the Superhappy armada is. They could wait for the Superhappies to return from Babyeater territory and blow up the system. The babies would be saved and humanity would be safe until the next nova.

Full cooperation is not one of the scenarios I outlined, since most humans would not want to become Superhappy. As the Confessor said, "You have judged. What else is there?"

Does anyone else have suspicions about the "several weeks" timeframe that the Lady 3rd has given for the transforming of the Babyeaters?
What can the Superhappies do in several weeks, regardless of their hyper-advanced and hyper-advancing technology? I suspect not much other than kill off most of the species. A quick genocide would reduce suffering more in the long run than an arduous peaceful solution would.

Genocide seems even more likely since the Lady 3rd told Akon that his decision would be identical to that of the other human decision makers.
The Babyeaters aboard their ship decided not to cooperate and were destroyed. The rest of the Babyeater decision makers will likewise refuse to cooperate and will have to be destroyed (in the mind of the Lady 3rd).

So at this point, the Confessor shocks the Administrator and they allow the Superhappies to go on with their genocide of the Babyeaters. Unavoidable and humanity would have done a very similar thing anyway. Then destroy the star and go back to Earth to prepare to meet the Superhappies again in a few decades or so (since their progress is a few orders of magnitude faster, humans can easily expect to see them again uncomfortably soon). Preparations would include eliminating suffering and such so that a new war would be avoided after the next meeting. Why on earth haven't they eliminated pain anyway? :)

I'm beginning to suspect this is a trick question. Well, sort of.

If the situation were reversed, how would you answer? If the technologically advanced Babyeaters had offered a one-sided "compromise" and then destroyed the primitive Superhappy ship when they refused?

The strong aliens have demonstrated their willingness to defect in a prisoner's dilemma type situation while the weak ones cooperated. That suggests we should cooperate with the weak ones and defect against the strong ones. I don't think the particulars of their moral systems should override that.

Prisoner's Dilemma has been prominent enough in the story that Akon's failure to appreciate the implications of the defection seems like a severe lapse of judgement. The Confessor stuns him and the remaining crew reconsiders the situation.
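The reciprocity argument above can be sketched as a tit-for-tat rule. A minimal sketch, assuming the strong aliens' destruction of the Babyeater ship counts as a defection and the Babyeaters' open data-sharing as cooperation; the payoff numbers are conventional textbook values, not anything from the story:

```python
# Standard illustrative Prisoner's Dilemma payoffs (our score listed first).
# These numbers are conventional textbook values, not taken from the story.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # we cooperate, they defect: the sucker's payoff
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def reciprocate(observed_move: str) -> str:
    """Tit-for-tat: mirror the other party's demonstrated behavior."""
    assert observed_move in ("C", "D")
    return observed_move

# The Superhappies defected (destroyed the Babyeater ship after a
# one-sided "compromise"); the Babyeaters cooperated.
our_move_vs_strong = reciprocate("D")
our_move_vs_weak = reciprocate("C")

# Reciprocation avoids the sucker's payoff against a known defector
# while preserving the mutual-cooperation payoff with a cooperator.
assert PAYOFFS[(our_move_vs_strong, "D")][0] > PAYOFFS[("C", "D")][0]
assert PAYOFFS[(our_move_vs_weak, "C")][0] == 3
```

The point of the sketch is only that the reciprocity rule conditions on demonstrated behavior, not on the content of either species' moral system.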

The Informations told/implied to the Humans that they don't lie or withhold information. That is not the same as the Humans knowing that the Informations don't lie.

Eliezer's novella provides a vivid illustration of the danger of promoting what should have stayed an instrumental value to the status of a terminal value. Eliezer likes to refer to this all-too-common mistake as losing purpose. I like to refer to it as adding a false terminal value.

For example, eating babies was a valid instrumental goal when the Babyeaters were at an early state of technological development. It is not IMHO evil to eat babies when the only alternative is chronic severe population pressure which will eventually either lead to your extinction or the disintegration of your agricultural civilization with a reversion to a more primitive existence in which technological advancement is slow, uncertain and easily reversed by things like natural disasters.

But then babyeating became an end in itself.

By clinging to the false terminal value of babyeating, the Babyeaters caused their own extinction even though at the time of their extinction they had an alternative means of preventing an explosion of their population (specifically, editing their own genome so that fewer babies are born: if they did not have the tech to do that, they could have asked the humans or the Superhappies for it).

In the same way, the humans in the novella and the Superhappies are the victims of a false terminal value, which we might call "hedonic altruism": the goal of extinguishing suffering wherever it exists in the universe. Eliezer explains some of the reasons for the great instrumental value of becoming motivated by the suffering of others in Sympathetic Minds in the passage that starts with "Who is the most formidable, among the human kind?" Again, just because something has great instrumental value is no reason to promote it to a terminal value; when circumstances change, it may lose its instrumental value; and a terminal value once created tends to persist indefinitely because by definition there is no criterion by which to judge a system of terminal values.

I hope that human civilization will abandon the false terminal value of hedonic altruism before it spreads to the stars. I.e., I hope that the human dystopian future portrayed in the novella can be averted.

Geoff: "They also don't withhold information from each other. This could allow a specially-crafted memory to disrupt or destroy the entire race."

This is not Star Trek, my Lord.

"Even the kiritsugu are shocked by the slightest display of suffering, so it's not much of a stretch to say some images exist that would permanently traumatize all Superhappies."
This is not going to work. The kiritsugu learned about the Babyeater culture without being impaired. Nothing humanity can reasonably come up with in the relevant timeframe will come close to that knowledge in shock-value.

Note that the kiritsugu as depicted through Cultural Translator versions 2 and 3 doesn't show any shock at humans being stressed; that depiction only appears in version 16. As such, it seems likely that this depiction is not based on the kiritsugu's actual emotional state, but rather was added to better allow humans to communicate with ver.

Why should we care for some crystalline beasts? We don't desire to modify lions to eat vegetables.

I do. Pain is painful in "beasts" too. What does it matter if they are made of crystals, are hairy or whatever?

Chris, continuing with my analogy, if instead of lobotomy, I was forced to undergo a procedure that would make me a completely different person without any debilitating mental or physical side effects, I would still consider it murder. In the case of Eliezer's story, we are not talking about enforcement of a rule or a bunch of rules, we are talking about a permanent change of the whole species on the biological, psychological and cultural level. And that, I think, can be safely considered genocide.

The humans, Babyeaters, and Superhappies were attracted by the nova. They were all eager to meet aliens. The Babyeaters and the Superhappies have the means to create supernovae artificially. They should be able to create ordinary novae too. This would be a good way to meet aliens. Why haven't they tried that?

Peter - I am, sadly, not an astrophysicist, but it seems reasonable that such an act would substantially decrease the negentropy available from that matter, which is important if you're a species of immortals thinking of the long haul.

Peter, being able to blow up a whole star (a process that is obviously going to involve some kind of positive feedback cycle) is not the same as being able to start novas. A nova is not a detonation of a star. A nova is the detonation of a shell of hydrogen that has accumulated from a companion and compressed on the surface of a degenerate star (white dwarf).

I had asked why the Babyeaters and Superhappies have not intentionally created novae. But now I think it's pretty likely that the Babyeaters actually caused the nova. The Babyeaters were in the system first, despite being the least technologically advanced race, and despite having made special preparations for the hostile environment (the mirror shielding). If they had come in response to the nova, they probably would have been the last to arrive.

We know an Alderson drive can cause a supernova. We should consider the possibility that the original nova wasn't just a coincidental rendezvous signal, but was intentionally created by the superhappys. Of course this assumes that Alderson drives are just as good for creating a nova as a supernova.

I missed Eli's reply before my most recent post. Although he hasn't said that the Babyeaters can't induce a nova, I'm lowering my probability that they did.

What if the Superhappys created the Babyeaters and the supernova? The baby eaters wouldn't really eat babies, they wouldn't even really exist. And seeing the baby eaters would make humans more apt to compromise when they shouldn't. http://en.wikipedia.org/wiki/Argument_to_moderation

So shoot the hypnotized Captain.

2. ... and anesthetized the entire crew, at which point he proceeded to have nonconsensual sex with every person aboard the ship. When in Rome...!

Z. M. Davis: OK, well it may not be self-replicating but it was worth a shot. Extreme empathy is basically the only weakness the Superhappies have. I'm not a big Star Trek fan, so I haven't seen the first two episodes you linked to and I only vaguely remember the last one.

51a1fc26f78b0296a69f53c615ab5a2f64ab1d1e: Or early versions of the translator failed to convey the humans' stress to the Superhappies. The kiritsugu are rather isolated from the rest of the crew, so while they have knowledge of the Babyeaters, maybe they haven't seen the videos. It would be analogous to reading about the Holocaust versus stepping into a holodeck depicting a concentration camp. Yes, I'm assuming aliens have a bias similar to humans. If that's not the case, then all non-kiritsugu Superhappies will be grief-stricken for quite some time after hearing about the Babyeaters. There would also have to be a very good reason why kiritsugu lack an emotion/reaction found in the rest of their kind. Humans without empathy are autistic or psychopaths. Again I'm arguing from a human analogy, but removing an emotion can completely change a being (http://www.overcomingbias.com/2009/01/boredom.html).

Anyway, most of my speculation is probably wrong, but the main point I tried to make in my previous post is that Akon's leadership is seriously compromised. The Superhappies are very manipulative and the Confessor needs to get a handle on things before saving humanity gets any tougher.

Did I mention a holodeck? Ugh, curse you Star Trek.

Another question:

Do the Super Happies already know where the human worlds are (from the Net dump), or are they planning on following the human ship back home?

As noted earlier, the Superhappies don't appear to be concerned about the presumed ability of the Babyeaters to make supernovas. Perhaps they have a way of countering the effect, and have already injected anti-supernova magicons through the starline network back to Earth and Babyeater Prime. In that case trying to detonate either immediately or at Huygens would fail, while eliminating any trust the Superhappies had in us. Maybe that's not much worse; they wouldn't punish us for the attempt, it might just make them more aggressive about fixing us.

Also, is the cosmology such that the general lack of visible supernovas is significant? It would seem that the normal development for "human-like" technological civilizations is that shortly after discovering the Alderson drive, a mad scientist or misguided experiment blows up the home star. Babyeaters and Superhappies apparently avoided this by having some form of a singleton, and humans got lucky because the scientists were able to suppress the information. Humans may be the most individualistic technological civilization in the universe.

I'm surprised the Super Happy People are willing to allow pre-sentient Baby Eaters to be eaten. Since they do not distinguish between DNA and synaptic activity, they might regard the process of growing a brain as a type of thought and that beings with growing brains are thus sentient.

It seems we are at a disadvantage relative to Eliezer in thinking of alternative endings, since he has a background notion of what things are possible and what aren't, and we have to guess from the story.

Things like:

How quickly can you go from star to star?
Does the greater advancement of the superhappies translate into higher travel speed, or is this constrained by physics?
Can information be sent from star to star without couriering it with a ship, and arrive in a reasonable time?
How long will the lines connected to the novaing star remain open?
Can information be left in the system in a way that it would likely be found by a human ship coming later?
Is it likely that there are multiple stars that connect the nova to one, two or all three alderson networks?

And also about behaviour:

Will the superhappies have the system they use to connect with the nova under guard?
How long will it be before the babyeaters send in another ship? Before the humans do, if no information is received?
How soon will the superhappies send in their ships to begin modifying the babyeaters?

Here's another option with different ways to implement it depending on the situation (possibly already mentioned by others, if so, sorry):

Cut off the superhappy connection, leaving or sending info for other humans to discover, so they deal with the babyeaters at their leisure.
Go back to give info to humans at Huygens, then cut off the superhappy connection.
Go back to get reinforcements, then quickly destroy the babyeater civilization (suicidally if necessary) and the novaing star (immediately after the fleet goes from it to the babyeater star(s), if necessary).

In all cases, I assume the superhappies will be able to guess what happened in retrospect. If not, send them an explicit message if possible.
