The New York Times reports on a particularly interesting bias: moral hypocrisy. Unfortunately I could not get access to any of the primary reports, but generally the term refers to judging your own actions as moral when you would see the same actions as immoral in someone else. The experiment demonstrates the effect starkly:
You show up for an experiment and are told that you and a person arriving later will each have to do a different task on a computer. One job involves a fairly easy hunt through photos that will take just 10 minutes. The other task is a more tedious exercise in mental geometry that takes 45 minutes. You get to decide how to divvy up the chores: either let a computer assign the tasks randomly, or make the assignments yourself. Either way, the other person will not know you had anything to do with the assignments.
Now, what is the fair way to divvy up the chores?
When the researchers posed this question in the abstract to people who were not involved in the tasks, everyone gave the same answer: It would be unfair to give yourself the easy job.
But when the researchers actually put another group of people in this situation, more than three-quarters of them took the easy job. Then, under subsequent questioning, they gave themselves high marks for acting fairly. The researchers call this moral hypocrisy because the people were absolving themselves of violating a widely held standard of fairness (even though they themselves hadn’t explicitly endorsed that standard beforehand).
I must admit that I too would probably assign myself the easy task. However, I would hope that I would not be so hypocritical as to claim that I had behaved fairly when I did so. But of course, reading about the experiment is different from being part of it. Maybe I would have been just as hypocritical as the other subjects.
For me, the most interesting finding was that a simple intervention could eliminate the hypocrisy (although not the unfair action!):
[The researchers] brought more people into the lab and watched them selfishly assign themselves the easy task. Then, at the start of the subsequent questioning, some of these people were asked to memorize a list of numbers and retain it in their heads as they answered questions about the experiment and their actions. That little bit of extra mental exertion was enough to eliminate hypocrisy. These people judged their own actions just as harshly as others did. Their brains were apparently too busy to rationalize their selfishness, so they fell back on their intuitive feelings about fairness.
This is an intriguing and (to me) counter-intuitive means of preventing at least one pervasive bias. It seems surprising because we normally think of our biases as being subconscious, in opposition to our conscious goals of clear thinking. In that sense, we might expect that distracting our conscious minds by memorizing numbers would actually increase the opportunity for bias to creep in. Yet in this case, we see the opposite. Apparently our subconscious evaluations of morality are more accurate, and it is our conscious, volitional efforts that produce the distortion.
Eliezer Yudkowsky reported earlier on studies that showed the opposite effect, where distraction made people less accurate. This raises the question of what effect distraction would have on other forms of bias we have considered. I wonder whether there is a rule of thumb for when one might productively use mental distraction to improve decision making.
A great question.
Posted by: Robin Hanson | July 01, 2008 at 03:11 PM
Could it be that the bias is actually on the part of those who were judging the experiment but not participating in it?
My first thought was that it's not morally wrong, since you're participating in an experiment where you do not know how long it will take or how difficult it will be. Therefore choosing the easy option doesn't cause the latecomer any distress and is not immoral. Now, when you describe the experiment to people and ask them if it's immoral, their perspective might bias them to consider it immoral, because they might attribute the global knowledge they have to the latecomer who they think is being harmed. When you're participating in the experiment, however, the correct rationalization might come more easily to hand (unless you're distracted).
Posted by: poke | July 01, 2008 at 03:25 PM
My sense on this is that in this case people are favoring themselves over others, but not in a negative-sum manner, so they are acting ethically (by practically any standard I know of, though least clearly by that of Smith's neutral observer) but not fairly. Most people lack the mental vocabulary to make such distinctions, however, and those who do make such distinctions rationally don't expect the experimenters to do so. But when not distracted they are vaguely aware of something like this distinction; at least, they are so aware when their self-interest motivates them to put more than typical effort into the evaluation. They were legitimately unwilling to judge their behavior, or that of their teammates, as unethical, but not inclined or able to conveniently express that it was unfair but not wrong, so they glossed over the details and claimed fairness. Basically, this sort of problem will come up whenever affect-heavy words that have meanings distinct from the word's affect are used.
Posted by: michael vassar | July 01, 2008 at 03:45 PM
My interpretation of the results is that when individuals have the mental ability to self-rationalize, they probably will, and they are probably somewhat aware that they are doing it as well. The key point about the intervention, in my mind, is that they were asked to *retain* the list of numbers, implying that they were constantly preoccupied with keeping the list in working memory in order not to forget it. Under these circumstances, most people are simply not capable of coming up with an explanation for why their choice was not unfair while simultaneously holding a list of numbers in working memory. They realize that they will forget the numbers if they stop thinking about them for more than a split second, and they are not comfortable saying "it was fair" without having a rationalization in mind.
I'd predict that if they actually had a real stake in whether they answered "it was fair" or "it was unfair", they would consciously choose to dump the memory task and focus on rationalizing, and they would be somewhat aware that they were doing that. I'd also predict that any intervention that does not tax the executive functions would not show the intervention effect.
Posted by: Joseph Knecht | July 01, 2008 at 04:02 PM
Oops, I hadn't seen the article, where the interpretation I gave is offered as their explanation too. It is a completely expected and unremarkable result to me, but still worthwhile to have done, of course. Is anybody *really* surprised by this result?
As for a rule of thumb for when to distract yourself, I don't think that is necessary or beneficial if rationalization is a fairly conscious activity. In that case, you can choose to do it or not, and if you can choose to distract yourself to prevent yourself rationalizing something that you have a vested interest in, it would be even easier just not to rationalize in the first place. For complex judgments, some serious thought might be required, and so distraction wouldn't work since it would prevent coming to a judgment. If it turns out not to be so conscious a process, one could try the heuristic that a simple moral judgment that one has any personal interest in at all calls for some distraction (assuming one prefers not to let the rationalization happen).
Posted by: Joseph Knecht | July 01, 2008 at 04:55 PM
If System 2 intelligence is being used to defeat itself, as in rationalization, and System 1 tends to produce good answers on its own, then distracting deliberation will yield better results. If intelligence is being used productively, trying to memorize a string of numbers at the same time will make you do worse. That's the first general rule that comes to mind.
Posted by: Eliezer Yudkowsky | July 01, 2008 at 07:10 PM
I think I like Michael Vassar's claim. I can see how you would claim that the participant is acting 'unfairly,' but only insofar as I don't think 'unfair' is actually a bad thing. I certainly don't think that assigning yourself the easy task is doing anything wrong--I do things like that all the time.
Posted by: Jadagul | July 01, 2008 at 07:52 PM
Jadagul and Michael Vassar raise a very interesting point. Our intuitions about fairness as a virtue are typically justified from a consequentialist perspective, but this seems to be a case where the "unfair", self-interested action was not inherently unethical. However, this is a very contrived argument, and disregarding intuitive moral rules is nevertheless problematic.
My hunch is that Eliezer is right and this result will hold up even if replicated in a less ambiguous situation.
Posted by: anonymous | July 01, 2008 at 09:08 PM
BTW, this is easily the most fascinating bias I've heard about this whole month.
Er, make that, "over the last 30 days".
Posted by: Eliezer Yudkowsky | July 01, 2008 at 10:09 PM
It seems to me that the natural follow-up question is: can you extract both System 1 and System 2 answers, or do you memoize whichever comes out first? System 1 probably relies on memoizing more than System 2, so going to System 2 first probably won't work, but if they asked the subjects again, after letting them flush the list, would they stick with the claim of unfairness?
Posted by: Douglas Knight | July 02, 2008 at 01:55 AM
What is the moral weight of taking the activity you prefer vs taking the easy one?
Personally, I think I'd vastly prefer a mental geometry exercise to a hunt through pictures, so would it be fair of me to claim my preferred activity for myself? Perhaps it is moral to sacrifice my preferred activity, but without knowing the other party's preference this is likely to be useless self-flagellation.
What about the fairness of the "first come first served" rule of thumb? Is it immoral to punish latecomers?
There is probably a lot more entering the subject's judgement, on various levels, than simply making themselves appear fair.
Posted by: Nanani | July 02, 2008 at 02:07 AM
My thought is that traditional morality always assumes an action is forbidden in the hypothetical, but people frequently engage in whatever serves their personal interests, justifying it in situational terms.
Posted by: michael vassar | July 02, 2008 at 03:29 AM
But did they remember to invert the results for mathematicians?
:-)
Posted by: Shane Legg | July 02, 2008 at 07:02 AM
There is an advantage in appearing better than you are, and even in believing that you are better than you are. In some cases this translates as overconfidence, in others as subconscious hypocrisy. The trick is that "better" in appearance and belief refers not to what is good for you, but to what is good for those who observe your behavior. Thus, you deceive yourself into thinking that you are more helpful to others (or your "tribe") than you really are. If you are generally helping people, you think that you are helping them more than you do, and if you secretly steal their resources, you think that you hurt them less than you do. Biases are divergences of reasoning from facts, but they are usually expressed in behavior, which is about not just facts but also goals. Distorting the reasoning process with another task makes this boasting adaptation less able to influence people's decisions.
I expect that if the people who receive the harder task as a result of our participants' actions were presented as being from "another tribe" or as "enemies", the bias would disappear. In fact, a slightly opposite bias might appear.
Posted by: Vladimir Nesov | July 02, 2008 at 08:05 AM
Anyone looked at the actual study more closely? More to the point, anyone have any idea if one can use this effect on themselves to help debias?
I.e., if I suspect I may be rationalizing something, and I'm having trouble directly driving right through my biases, would trying to distract myself be a useful technique for improving my judgement?
Posted by: Psy-Kosh | July 02, 2008 at 03:18 PM
Perhaps at least some of the people being experimented on were not initially thinking through whether they were being fair or unfair, rational or irrational, but rather had a thought uppermost in their minds when deciding which task to do: "why didn't the experimenter make the choice for me?" Irritation with the experimenter may make people feel defensive and selfish. Even if they thought it unfair that they had to make the choice of which task to do, they can still evaluate whether their choice was the fairest (at least in retrospect), and sometimes tell the truth about that self-evaluation.
Also, do people, at the time they are making a choice, know that if they choose the easy task they are acting unfairly towards later arrivals, or is this a judgement they can make only in retrospect? In the real world, people learn from their moral choices, and may consciously act differently when a similar situation presents itself.
Posted by: crf | July 02, 2008 at 08:33 PM
The neutral observers are not neutral. They are trying to manipulate the questioner into behaving "fairly".
Posted by: Alan Crowe | July 03, 2008 at 02:39 PM