
February 09, 2009

Comments

I preferred the Superhappy ending, and in fact the story nudged me further in that direction. I guess I don't really get what the big deal about pain and suffering is; there's no physical pain on the Internet, and it seems to work just fine.

Three Worlds Collide would make a decent movie... you'd just have to make the reasoning of the characters more explicit for people unfamiliar with the concepts involved.

Eliezer, one qualm: You consistently bring up mirror neurons and consider it to be obvious prima facie that they are used for action understanding in humans. Unfortunately, most contemporary neuroscientists in the field agree that there is no consistent evidence of this:

http://www.cognitionandculture.net/index.php?option=com_content&view=article&id=223%3Ado-we-have-mirror-neurons-at-all&catid=32%3Aoliviers-blog&Itemid=34

http://talkingbrains.blogspot.com/2008/08/eight-problems-for-mirror-neuron-theory.html

That is not to say that humans don't understand other people's actions or that we lack an adequate theory of mind! But it does mean that there is no reason to suspect that those complicated cognitive events can be reduced to simply a group of "mirror" neurons. Ramachandran often mentions them too, which also irks me slightly.

3WC would be a terrible movie. "There's too much dialogue and not enough sex and explosions", they would say, and they'd be right. And you shouldn't just tack them on, either; sex and explosions should flow out naturally as an indispensable part of the plot.

Andy, consider "mirror neurons" as shorthand for "empathic architecture" rather than implying that the whole thing gets done by a small group of actual neurons a la the "grandmother neuron".

Eliezer: if you show more of it from the perspective of the Superhappies, dialogue itself takes care of that problem.

"3WC would be a terrible movie. "There's too much dialogue and not enough sex and explosions", they would say, and they'd be right."

Hmmm... Maybe we should put together a play version of 3WC; plays can't have sex and explosions in any real sense, and dialogue is a much larger driver.

And moreover, I had gone to considerable length to present the Superhappy argument in the best possible light.

Hmm. I felt that while the Superhappy argument was presented in the best possible light, the Superhappy ending wasn't. The non-Superhappy ending was in two parts and contained all kinds of cool things, like the use of emergency flags to manipulate the local prediction markets and more insights into the personalities of the different characters. The Superhappy ending, on the other hand, was just one part and was basically just a pretty dull overview of how everybody considered humanity's future to be horrible.

To me, it felt like a pretty blatant statement of "this is the future that we don't want".

Frankly, what I've kinda on and off wanted to see was someone turn Nick Bostrom's "Fable of the Dragon Tyrant" into a movie. That could, perhaps, actually work. Maybe.

I think that you make good points about how fiction can be part of a valid moral argument, perhaps even an indispensable part for those who haven't had some morally-relevant experience first-hand.

But I'm having a hard time seeing how your last story helped you in this way. Although I enjoyed the story very much, I don't think that your didactic purposes are well-served by it.

My first concern is that your story will actually serve as a counter-argument for rationality to many readers. Since I'm one of those who disagreed with the characters' choice to destroy Huygens, I'm predisposed to worry that your methods could be discredited by that conclusion. A reader who has not already been convinced that your methods are valid could take this as a reductio ad absurdum proof that they are invalid. I don't think that your methods inexorably imply your conclusion, but another reader might take your word for it, and one person's modus ponens is another's modus tollens. Of course, all methods of persuasion carry this risk. But it's especially risky when you are actively trying to make the "right answer" as difficult as possible to ascertain for dramatic purposes.

Another danger of fictional evidence is that it can obscure what exactly is the structure and conclusion of the argument. For example, why were we supposed to conclude that evading the Super-Happies was worth killing 15 billion at Huygens but was not worth destroying Earth and fragmenting the colonies? Or were we necessarily supposed to conclude that? Were you trying to persuade the reader that the Super-Happies' modifications fell between those two choices? As far as I could tell, there was no argument in the story to support this. Nor did I see anything in your preceding "rigorous" posts to establish that being modified fell in this range. It appeared to be a moral assertion for which no argument was given. Or perhaps it was just supposed to be a thought-provoking possibility, to which you didn't mean to commit yourself. Your subsequent comments don't lead me to think that, though. This uncertainty about your intended conclusion would be less likely if you were relying on precise arguments.

Stories and movies are deeply different media. I'm surprised the transition works as often as it does.

I've recently become jaded on the use of fiction due to a nearly opposite line of thought: Fiction is really not important.

There are plenty of real problems and real drama.

Eliezer was not able to write about the experience of being a rationalist merely because he read about the experience of being a rationalist in some other work of fiction. Narrative might be required to convey experience; fiction is not.

It is the truths you are trying to convey that matter. But to these we add invented societies, speculation about future technologies, or future technologies that we know can't exist. These may as well be wizards and sorcery, which is another fine genre for burying important lessons.

The complex and detailed universes inevitably lead to utterly pointless arguments.

Plus, the author will be drawn towards classic myth-forms to make the story better. (And the stories will rise and fall based on their appeal as stories, not the validity or true importance of their lessons.)

I recently watched all of the Star Wars movies with someone who had never seen them. She took Palpatine's description of the Jedi as actually power-seeking to be universe-accurate exposition. How many falsehoods will fiction bury the truth under?

"Without that ability to sympathize, we might think that it was perfectly all right* to keep slaves."

Nearly all people for thousands of years thought it was perfectly all right to keep slaves. Are you saying they didn't have the ability to sympathize? This is the sort of profoundly ahistorical "thinking" that irritates so many people: someone who considers his own society's beliefs to be laws of reality despite obvious historical evidence in the other direction that he never bothered to think about.

http://en.wikipedia.org/wiki/The_Iron_Dream

"The complex and detailed universes inevitably lead to utterly pointless arguments."

This is actually an argument **for** using fiction. Real situations are more complex and much, much more likely to result in peripheral arguments than fictional situations.

@billswift

"Nearly all people for thousands of years thought it was perfectly all right to keep slaves"

And many still do today - for example, Shari'a endorses slavery. Our Western values are far from universal and cannot be taken for granted.

Tom: "Hmmm.. Maybe we should put together a play version of 3WC [...]"

That reminds me! Did anyone ever get a copy of the script to Yudkowski Returns? We could put on a benefit performance for SIAI!

*ducks*

And perhaps the current slavers would change their minds if they read the right book, and perhaps not - more probably not, I think, without other changes as well. As I noted in the post text, the mirror neurons do have an off switch. It might take some abstract argument to turn them back on. Or it might take a "slave" rescuing their daughter in real life, instead of fiction. Maybe even that wouldn't do it. Maybe their and my reflective equilibria are so far apart that they can't be called by the same word "right".

Nonetheless - Uncle Tom's Cabin had an impact. Historically speaking.

It's easy to talk about how the Other is an alien monster who will just refuse to be persuaded by anything. I can't persuade the invincibly obstinate and despicable image of them that exists in your heads. Reality might be another story.

Tyrrell said:

Nor did I see anything in your preceding "rigorous" posts to establish that being modified fell in this range. It appeared to be a moral assertion for which no argument was given.

Yeah. Eliezer, in your story, being modified just didn't seem bad enough to be obviously preferable to killing 15 billion people. This creates moral ambiguity that is great in a story, but not if you wanted to communicate a clear moral.

The way the story was presented, I was thinking "humanity without suffering, and having to eat non-sentient babies?... Is that really so bad as to justify killing 15 billion people?" Now, as a reader of Overcoming Bias, I know that Value is Fragile, and that scaling up the human brain is a highly risky proposition that the brain is not designed for. So, the end result of the Superhappy proposal would not be "Humans minus suffering, eating non-sentient babies." It would not be human at all.

Superhappies can't just surgically remove negative emotions and pain from our brains and leave everything else untouched. Likely the Superhappies would make a first pass and remove suffering, but the neurochemical changes would drive us all insane (happily insane). The Superhappies would then have to make another pass to stabilize our brains, which would involve messing with who-knows-what. But stabilize us towards what? The Superhappies can't know the "right way" to make a sane human brain which doesn't experience suffering, because no such thing exists. If the Superhappies were ever at a loss for what to do, they would probably just alter us in the direction of their own values and psychology. The end result of the Superhappies' work on us would probably think like a Superhappy, except with some token human values.

Even if the Superhappies were able to strip away human pain without mishap, there could be negative unintended consequences. If you remove negative emotions, you would actually disinhibit a lot of antisocial human behavior due to the loss of shame and guilt. Then the Superhappies would have to remove any aggressive or antisocial impulses we have, resulting in even more changes, which would all lead to a risk of insanity, or other problems that require even more "fixes."

Any modification the Superhappies make is only going to lead to consequences which result in even more modifications, which have their own consequences. When does this stop? I think the answer is that it doesn't stop until the product is much more Superhappy than it is human. (If instead the Superhappies were to let the humans be in charge of modifying themselves, then a higher degree of continuity with past humanity might be preserved.)

So Eliezer, you and I know the potential pitfalls of modifying humans, but since the story doesn't show them, the Superhappy proposal looks overly attractive, and the humans who resist it look excessively close-minded and trigger-happy in killing 15 billion of their own kind in order to resist something that just doesn't seem as bad (in the context of the story). To truly complete the story to show what you want it to show, you could have a second part of the normal ending that shows exactly why the Superhappy proposal is so bad based on your writings about the riskiness of brain modification.

billswift:
But those peripheral arguments will still be about things that in some sense matter, as opposed to, say, midichlorians.

HughRistik:

Speaking as a new reader of Overcoming Bias myself--I think that the sort of people who read this blog are more likely to miss how dangerous the Superhappies are, because we've considered ways that human suffering could be reduced or eliminated while still letting humans develop properly. Then, when people who already have ideas about how to reduce suffering read that the Superhappies want to eliminate suffering, they assume that the Superhappies' plans are the same as their own. (I'm not sure if this is a previously discussed and named bias, but it sure ought to be.)

As far as I can tell, the Superhappies don't care about proper human development, and are not even curious as to what it is. They want us to be happy; being "good people" doesn't enter into it. I'd say the Superhappies are "paperclip maximizers" for happiness-- though their idea of happiness is more complicated than a paperclip, the same principle is at work.

I would have said that the Superhappy proposal to find a happy middle ground between their values and the Babyeaters' by having everyone eat thousands of nonsentient babies was a preposterous straw-man for moral relativists, if that proposal hadn't actually been even more preposterously defended in the comments. Even if it's morally neutral to eat thousands of nonsentient babies, doesn't it seem... well, kind of ridiculous?

Which leads me to a point about the subject of this post, one that I don't think has been brought up yet: sometimes, people understand something more easily and more completely if they can see an example of it. Which is easier, to explain to someone what a cracker is, or to just show them a cracker? It's not practical to build a paperclipper and show it to everyone -- and that's where fiction comes in.

Uncle Tom's Cabin is not a valid argument that slavery is wrong. "My mirror neurons make me sympathize with a person whose suffering is caused by Policy X" to "Policy X is immoral and must be stopped" is not a valid pattern of inference.

Consider a book about the life of a young girl who works in a sweatshop. She's plucked out of a carefree childhood, tyrannized and abused by greedy bosses, and eventually dies of work-related injuries incurred because it wasn't cost-effective to prevent them. I'm sure this book exists, though I haven't personally come across it. And I'm sure this book would provide just as emotionally compelling an argument for banning sweatshops as Uncle Tom's Cabin did for banning slavery.

But the sweatshop issue is a whole lot more complex than that, right? And the arguments in favor of sweatshops are more difficult to put into novel form, or less popular among the people who write novels, or simply not mentioned in that particular book, or all three.

The problem with fiction as evidence is that it's like the guy who says "It was negative thirty degrees last night, worst snowstorm in fifty years, so how come them liberals are still talking about 'global warming'?" It cuts off a tiny slice of the universe and invites you to use it to judge the entire system.

But I agree that fiction is not solely a tool of the dark side. Eliezer's comment about it activating Near-mode thinking struck me as the most specifically useful sentence in the entire post, and I would like to see more on that. I would also add one other benefit: fiction drags you into the author's mindset for a while against your will. You cannot read the book about the poor girl in the sweatshops without - at least a little - cheering on the labor unions and hating the greedy bosses, and this is true no matter how good a capitalist you may be in real life. It confuses whatever part of you is usually building a protective shell of biases around your opinion, and gets you comfortable with living on the opposite side of the argument. If the other side of the argument is a more stable attractor, you might even stay there.

...that wasn't a very formal explanation, but it's the best way I can put it right now.

Not enough sex and explosions in 3WC? Are you joking?

Oh, and it would be easier to find someone to make it into a good visual novel rather than a good movie.

Yvain, I warned against granting near-thought virtues to fictional detail here. I doubt Uncle Tom's Cabin would have persuaded many slaveholders against slavery; I expect well-written, well-recommended anti-slavery fiction more served to signal to readers where fashionable opinion was moving.

@Yvain:
Mathematical proof is a valid argument even though it doesn't contain any new information: what was true remains so.

Fiction isn't supposed to act as evidence; it's supposed to place you in a specific focus of attention, where you resolve your own questions for yourself, from evidence you already hold. It doesn't explicitly state abstractions which you are supposed to learn in order to master new thoughts, reinterpret old data, or bind existing morals. It invites you to invent abstractions on a given topic for yourself.

Of course, all the usual biases will haunt you no less than in real life, plus the bias to interpret fiction as literal evidence, and they can be exploited to derail you just as happens in real life, only with more control. Although control in fiction is still like programming: omnipotence without omniscience, where the ability to manipulate the story doesn't always come with a way to efficaciously bias the reader.

Just on the off-chance, are there any OB readers who could get a good movie made?

If machinima counts as "a good movie" you might want to talk to Hugh Hancock (I've no idea if he reads OB, but based on his other interests he may well do).

Eliezer, as I indicate in my new post, the issue isn't so much whether you the author judge that some fiction would help inform readers about morals, but whether typical readers can reasonably trust your judgment in such things, relative to the average propaganda content of authors writing apparently similar moral-quandary stories.

I know that Tucker Max, whose movie I Hope They Serve Beer in Hell will be out sometime this year, has read this site since July of 2007 at least. He's actually how I discovered Overcoming Bias.

He's said numerous times that Eliezer would be absolutely fantastic if his posts weren't so ridiculously long and wandering at times.

He'd be a good person to talk to about making a movie (since his own was designed specifically to avoid that "dumbing-down process") and is probably going to make several tens of millions over the next few years.

Why should we believe there are "moral truths"? And why are the rules so different with regard to physics? What other topics have a standard more like morality than physics?

I agree with Yvain. The mirror neuron argument was just shoddy. After acknowledging that the science didn't necessarily support your point about them, you then said that doesn't matter. If the truth of an argument is irrelevant, why bring it up at all? Doesn't such an argument falling back on "deeper truth" have the same weaknesses as the religious/mystical in their attempts to avoid falsification?

This is an idea that I think is plausible, although it might be false: Uncle Tom's Cabin was more an epiphenomenon in the demise of slavery than a cause. It is an easy focal point to think of, and so we associate the end of slavery with it. If the book had failed (perhaps through having a lousy publisher or distribution) we would instead point to something else whose fame has been displaced in our own history by Stowe's novel.

"But those peripheral argument will still be about things that in some sense matter, as opposed to say, midichlorians."

Of course they are. That's why they distract so strongly from the central point.

Why would anyone argue about midi-chlorians? The "explanation" of Jedi powers in that movie detracted from the Star Wars universe.

"Of course they are. That's why they distract so strongly from the central point."

Even if they distract more from the central point, they are still real. They still have some potential relevance to the reader.

I think it would be a disservice to train someone up in the arts of rationality only to have most of their thoughts revolve around the facts of some fantasy universe.

Also, just because our universe _is_ more detailed than fictional ones, doesn't mean that all of that detail has to be available to the reader. We can offer simplified descriptions of situations.

Eliezer: It may be worth noting that SIAI just hired a new president FROM a branch of the film industry who has some familiarity with the sort of tax laws that can make indie movies a good investment even when expected value appears negative, and that SIAI's largest donor is the producer of an excellent movie about the marketing of cigarettes.

Other than that:

I agree with Kaj.
I really like Hugh's point.
I don't think 3WC or Dragon Tyrant would work as movies. I don't know what Eliezer's got, however, WRT stories.

Tree Frog: Do you know Tucker, and are you suggesting that I speak with him? That's basically my job, after all.

I still don't see the actual practical benefit of suffering. I've lived a very sheltered life, physically and emotionally. I've never needed stitches, never had my heart broken, I've always been pretty low-key emotionally, and I don't feel like I'm missing anything.

Besides, what are we going to do NEXT time we run into a more advanced race of aliens? I suppose we can just keep blowing up starlines, but what happens if they get the jump on us, like the superhappies got the jump on the babyeaters? It seems like we need powerful allies much more than we need our precious aches and pains.

Eliezer,

I am very much inclined to analyze your articles because you are indeed very enthusiastic about your theories, which is a rarity these days. On the link there’s a Wordle tag cloud picture of your article.

As you can see, there's a lot of "argument(s)", "moral", "abstract", "fiction", and a somewhat humble "experience".

To the point – fiction in the literary domain is often a method for implying moral concepts. But fiction is, above all, an invitation to imagine. There is a catch. We can imagine a setting, a world, a relationship. But we sometimes cannot imagine that this setting, or world, or relationship, is morally justified, simply because we have an elaborate moral hierarchy to begin with.

Another aspect is that fiction is a theory we know is false, but useful. Beware, Yudkowsky – fiction is typically useful when it relates to things that exist or could exist, but only if we are able to observe them one day.

Hence a problem with science fiction – it typically uses big time spans to make the books interesting to read. In the end, as we very well know, science fiction is either too far-sighted or too short-sighted, but not really useful.

So what the hell am I talking about? Eliezer, in your work there are many useful ideas. “Three Worlds Collide” is more like entertainment.

@Robin: Thank you. Somehow I missed that post, and it was exactly what I was looking for.

@Vladimir Nesov: I agree with everything you said except for your statement that fiction is a valid argument, and your supporting analogy to mathematical proof.

Maybe the problem is the two different meanings of "valid argument". First, the formal meaning where a valid argument is one in which premises are arranged correctly to prove a conclusion eg mathematical proofs and Aristotelian syllogisms. Well-crafted policy arguments, cost-benefit analyses, and statistical arguments linked to empirical studies probably also unpack into this category.

And then the colloquial meaning in which "valid argument" just means the same as "good point", eg "Senator Brown was implicated in a scandal" is a "valid argument" against voting for Senator Brown. You can't make a decision based on that fact alone, but you can include it in a broader decision-making process.

The problem with the second definition is that it makes "Slavery increases cotton production" a valid argument for slavery, which invites confusion. I'd rather say that the statement about cotton production is a "good point" (even better: "truthful point") and then call the cost-benefit analysis where you eventually decide "increased cotton production isn't worth the suffering, and therefore slavery is wrong" a "valid argument".

I can't really tell from the original post in which way Eliezer is using "valid argument". I assumed the first way, because he uses the phrase "valid form of argument" a few times. But re-reading the post, maybe I was premature. But here's my opinion:

Fiction isn't the first type of valid argument because there are no stated premises, no stated conclusion, and no formal structure. Or, to put it another way, on what grounds could you claim that a work of fiction was an invalid argument?

Fiction can convincingly express the second type of valid argument (good point), and this is how I think of Uncle Tom's Cabin. "Slavery is bad because slaves suffer" is a good point against slavery, and Uncle Tom's Cabin is just a very emotionally intense way of making this point that is more useful than simple assertion would be for all the reasons previously mentioned.

My complaint in my original post is that fiction tends to focus the mind on a single good point with such emotional intensity that it can completely skew the rest of the cost-benefit analysis. For example, the hypothetical sweatshop book completely focuses the mind on the good point that people can suffer terribly while working in a sweatshop. Anyone who reads the sweatshop book is in danger of having this one point become so salient that it makes a "valid argument" of the first, more formal type much more difficult.

Well, my point is that fiction isn't argument at all; it's a theme for musing on your own questions. It's sometimes useful to read even something you know to be wrong, on a theme interesting to you, written by a thoughtful author. You don't expect to move towards agreement, you know the stuff is wrong, but you can light sparks of your own insight off its pages. When given a mathematical proof, its correctness is for you to appraise. Fiction gives you your own thoughts; take them or leave them.

Michael Vassar: I have no idea who you are, but I'll proceed on the assumption that the "job" you mention is one of representing Eliezer and/or the other OvercomingBias authors in some sort of business capacity.

I don't know Max on a personal level. We've talked a few times on his board and he might be dimly aware of my existence, but I make no claims as to what he will do if contacted by Eliezer or an agent of Eliezer's.

Serious discussions of potential OvercomingBias projects/movies and whatnot should be sent to Max's assistant, Ian Claudius - ian.claudius(AT)gmail.com. The Rudius people are smart and good content creators (multiple book contracts, one soon to be hit movie and stuff I actually like).

Whatever the potential abuses, I think fiction has a valid role to play in dealing with philosophical questions.

One example that comes to mind: I've observed in several discussions on the problem of evil that theists tend to want to discuss the matter in the most abstract possible terms. It seems to be easier to swallow a theodicy when you don't have the unpleasant facts of extreme suffering vividly in mind.

In the case of the POE, fiction can serve to cut through rationalizations in a way argumentation alone can never hope to do.

Eliezer,

Took a bit after reading your babyeater pieces to get my thoughts in order, but my general picture is that you're misrepresenting the human condition, and that the whole of the story relies upon that misrepresentation. This is humanity you're talking about. While the readership of OB may match your profile decently, the human species, even extended into a moderately improved state, does not hold the value-set you represent.

1. Child-love/protection is (a) proximity-focused, (b) stronger than the abstract value of reciprocity. Folks are much better at getting over the idea that other folks are suffering than you give credit for. Hence the trade-off between Superhappies modifying humans and humans saving Baby-eater kids is a no-brainer. Nuke the local nova. Or better, point at a fundamental incompatibility between Baby-eater and Human ideology...that the SH plan misunderstands.

2. Us vs. Them is under-done as well. Ender's choice...where someone protects the human species, even at the cost of the other species, and then is later vilified by self and others, is much more likely. Nuke the local nova.

3. As far as I can tell, the story says starlines are dense and unpredictable, not that there's no other path. So...nuking Huygens star is too big a risk/cost for the uncertainty that baby-eaters will find another path. Nuke local, or don't bother.

Overall, I enjoyed it.

I think I have a co-operation instinct that is pushing me towards the Superhappy future.

It feels better, but it's probably not what I would do in real life. Or I am more different from others than I give credit for.

