
May 13, 2007

Comments

The same idea goes for insisting that the charity you donate to is actually good at its mission. If you get your warm glow from the image of yourself as a good person, and if your dollars follow your glow, then competition among charitable organizations will take the form of trying to get good at triggering that self-image. If you get your glow from results, and if your dollars follow that, then charities will have much better incentives.

Your conclusion matches your data, but the data is suspiciously focused on charity. Is scope neglect easier to elicit in such contexts? Other explanations include the difficulty of making large numbers feel relevant, and a lack of imagination on the researchers' part.

Douglas, I understand that scope insensitivity decreases substantially, but does not go away entirely, when personal profits are at stake.

It's not easy to devise experiments that distinguish unambiguously between the explanations that center around prototype-dominated affect, versus the warm glow of moral satisfaction. It seems pretty likely that both effects are at work.

How do people react if told "Here is a fixed amount of cash that must go to charity. How do you wish it to be spent?"

Might that not distinguish "purchase of moral satisfaction" from "scope neglect"?

I strongly favor the "warm glow" explanation, but I'd take it a step further.

For most people, the warm glow is only worth it if they get social credit.

Those yellow LiveStrong bracelets are a great example. They're about $1 or so, and purchasers wear them around all day advertising that they care about cancer. How many of those people would have donated an equivalent amount (just a buck) without the badge of caring they get to wear around?

Actually, in my experience it's the other way round - people feel they're doing their bit just by wearing the bracelets, so they'll pay less for a bracelet than they'd donate anonymously.

But like most anecdotes, that one story doesn't tell you anything - we need statistics if we want to truly know how people behave.

I'm not sure I buy that this is completely about scope insensitivity rather than about marginal utility and people thinking in terms of their fair share of a Kantian solution. Or, put differently, I think the scope insensitivity is partly inherent in the question, rather than a bias of the people answering.

Let's say I'd be willing to spend $100 to save 10 swans from gruesome deaths. How much should I, *personally*, be willing to spend to save 100 swans from the same fate? $1000? $10,000 for 1,000 swans? What about 100,000 swans -- $1,000,000?

But I don't *have* $1,000,000, so I can't agree to spend that much, even if I believe that it is somehow intrinsically worth that much. When I'm looking at what I personally spend, I'm comparing my ideas about the value of saving swans to the personal utility I give up by spending that money. $100 is a night out. $1000 is a piece of furniture or a small vacation. $10,000 is a car or a year's rent. $100,000 is a big chunk of my net worth and a sizable percentage of what I consider FU money. As I go up the scale my pain increases non-linearly, and my personal pain is what I'm measuring here.

So considering a massive problem like saving 2 million swans, I might take the Kantian approach. If, say, 10% of people were willing to put $50 toward it, that seems like it would be enough money, so I'll put $50 toward it, figuring that I'd rather live in a world where people are willing to do that than not.

Like many interpretations of studies like this, I think you're pulling the trigger on an irrationality explanation too fast. I believe that what people are thinking here is much more complicated than you're giving them credit for, and with an appropriate model their responses might not appear to be innumerate.

It's a hard question to ask in a way that scales appropriately, because money only has value based on scarcity. You can't ask "If you were emperor of a region with unlimited money to spend, what would it be worth to save N swans?" because the answer is just "as much as it takes". What you're really interested in is "Using 2007 US dollars as units: how much other consumption should be foregone to save N swans?" But people can only judge that *accurately* from their own limited perspective, where they have only so much consumption capacity to go around.
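A minimal sketch of that point, assuming (purely for illustration) log utility over remaining wealth. The dollar figures, the population numbers, and the pain_of_spending helper are hypothetical, not taken from any study:

    import math

    def pain_of_spending(amount, wealth):
        """Utility lost by spending `amount`, under log utility of remaining wealth.
        An illustrative assumption, not a claim about anyone's real preferences."""
        return math.log(wealth) - math.log(wealth - amount)

    wealth = 200_000  # hypothetical net worth in 2007 USD

    # Each outlay is 10x the previous one, but the utility cost grows
    # faster than 10x as the amount approaches total wealth.
    for amount in (100, 1_000, 10_000, 100_000):
        print(f"${amount:>7,}: utility cost {pain_of_spending(amount, wealth):.4f}")

    # The Kantian fair-share arithmetic from the comment: if 10% of a
    # (hypothetical) population of 1 million people each chip in $50, the
    # pooled fund is large even though no individual answer scales linearly
    # with the number of swans.
    population, share, pledge = 1_000_000, 0.10, 50
    print(f"Pooled funds: ${population * share * pledge:,.0f}")

The point of the sketch is only that a fixed per-swan value, filtered through a concave utility of wealth plus fair-share reasoning, can already produce answers that look scope-insensitive without any bias in the underlying valuation.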

I perceive that I've neglected to convey the existence of a gigantic body of supporting evidence.

Michael Sullivan, see e.g. http://www.sas.upenn.edu/~baron/cv1.htm:

Embedding. Kahneman and Knetsch (1992) asked some subjects their WTP for improved disaster preparedness and other subjects their WTP for improved rescue equipment and personnel. The improved equipment and personnel were thus "embedded in" the improved disaster preparedness, so the preparedness included the equipment and personnel, and other things too. WTP was, however, about the same for the larger good and for the smaller good included in it. Kahneman and Knetsch called this the "perfect embedding effect," presumably because a demonstration of it requires perfect equality of WTP of the two goods. When subjects were asked their WTP for the smaller good after they had just been asked about the larger one, they gave much smaller values for the smaller good than for the larger one, and much smaller values than those given by subjects who were asked just about the smaller good. This order effect is called the "regular embedding effect." It demonstrates that a good seen as embedded in a larger good has reduced value. Kemp & Maxwell (1993) replicated this regular embedding effect, starting with a broad spectrum of public goods, and narrowing the good down in several steps, obtaining WTPs for an embedded good that were 1/300 of WTP for the same good in isolation.

Adding up. In a related demonstration, Diamond et al. (1993) asked subjects their WTP values for preventing timber harvesting in federally protected wilderness areas. WTP for the prohibition in three areas was not much (if any) higher than WTP for prohibition in one of the areas alone. This result cannot be explained by assuming that subjects thought that protection of one area was sufficient: when they were asked their WTP to protect one area assuming that another was already protected, or a third area assuming that two were protected, their WTP values were just as high as those for protecting the first area. More generally, in this kind of "adding-up effect," respondents are asked their WTP for good A (e.g., a single wilderness area), for good B assuming that good A has been provided already, and for goods A and B together. WTP for A and B together is much lower than the sum of the WTP for A and for B (with A provided). Schulze, McClelland, & Lazo (1994) found similar results in a within-subject design: each subject rated A, B, and A and B together.
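To make the adding-up effect concrete, here is a small consistency check. The dollar figures are invented for illustration and are not from Diamond et al. (1993); a coherent valuation would make the package worth roughly the sum of its parts, while the reported answers fall far short of that sum:

    # Hypothetical illustration of the adding-up effect; these dollar figures
    # are invented for the example, not taken from Diamond et al. (1993).
    wtp_A = 60          # stated WTP to protect wilderness area A alone
    wtp_B_given_A = 55  # stated WTP for area B, told that A is already protected
    wtp_A_and_B = 65    # stated WTP for protecting both areas together

    # Under a consistent valuation, the package should be worth about the
    # sum of its parts.
    consistent_total = wtp_A + wtp_B_given_A
    print(f"Implied consistent WTP for A and B: ${consistent_total}")
    print(f"Reported WTP for A and B:           ${wtp_A_and_B}")
    print(f"Shortfall: ${consistent_total - wtp_A_and_B}")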

How many lives an action saves is less important than the emotional connotations of the act which takes those lives.
Take micronutrient dispersal programs vs. terrorism. Malnutrition kills orders of magnitude more people, and yet far more money is spent on terrorism prevention (well, mostly terrorism-prevention signaling, but that's another topic). This is because fighting terrorism is more exciting than fighting scurvy. The orders-of-magnitude difference in impact is ignored when evaluating which thing to spend money on. This makes choosing terrorism easier, since saving 100 people from terrorism is much better for public relations than saving 100 random kids with goiters.

I came across an interesting book that includes the topic of scope insensitivity: "Determining the Value of Non-Marketed Goods: Economic, Psychological, and Policy Relevant Aspects of Contingent Valuation Methods" by Raymond J. Kopp, Werner W. Pommerehne, and Norbert Schwarz. They suggest that while scope insensitivity on surveys is possible, it is not inevitable.

After providing an impressive list of studies rejecting the insensitivity hypothesis, they highlight two in particular: "First, the scope insensitivity hypothesis is strongly rejected (p<.001) by two large recent in-person contingent valuation studies, Carson, Wilks and Imber (1994) and Carson et al. (1994), which used extensive visual aids and very clean experimental designs to value goods thought to have substantial passive use considerations."

In order to prevent scope insensitivity, they suggest that the "respondent must (i) clearly understand the characteristics of the good they are asked to value, (ii) find the CV scenario elements related to the good's provision plausible, and (iii) answer the CV questions in a deliberate and meaningful manner."

