
August 09, 2008


Paul: For example, suppose that I think there's a 70% chance that P is true, but that my believing P is true will cause one puppy to die. Is the death of the puppy worth a 20%-plus-epsilon chance of truth, so that I should change my beliefs? How about two puppies? What if someone offers me one dollar? How about a million dollars? What's the function that converts badness or goodness of consequence into weight of evidence?

In cases like this, the fundamental model of an agent interacting with reality has broken down: an agent is supposed to be a mind with goals, plus sensors and effectors. To count as such an agent, your interaction with the environment has to factor through your sensors and effectors, i.e. your thoughts mustn't affect reality except by affecting what your body does. If this condition fails, it can become impossible to act rationally.

In Pascal's case, the notion of your thoughts being directly observable to God breaks the model. Something slightly different is wrong with Allen; I would best describe it as a mental illness.

Our ability to act rationally can be compromised if the privacy of our own thoughts is compromised. For example, I might hook you up to a brain scanner that reads your thoughts and then tortures you by producing exactly the worst outcome you can think of. The more rationally you think (e.g. by thinking about how you might escape back to your family), the worse the outcomes will be for you (e.g. the machine captures and kills your family right in front of you).

Roko: "Your thoughts mustn't affect reality except by affecting what your body does."

Since thoughts are based on some physical reality, everyone's thoughts MUST affect reality in ways besides affecting what your body does: for example, your thoughts cause (or are caused by) electrical activity and blood flow in your brain, physical actions you did not choose. So by your argument, our ability to act rationally is necessarily fundamentally compromised.

Can someone who is aware beforehand--as Allen is--that a decision will warp their reasoning ever sincerely commit to that decision? Wouldn't he find the cognitive dissonance required for such a choice incessantly distracting? I think it would be strange if our epistemologies were really that malleable.

Or are we talking about choosing to be effectively brainwashed? That'd be a big ol' philosophical can of worms.

A footnote on AA: I realize this is framed in terms of Allen's beliefs, but I also realize people tend to forget framing devices and sources of information. AA has a lot of local variation about religious belief. Some groups completely identify a "higher power" with God, while others leave it up to the individual. One solution for atheists is to identify the group as their higher power.

It seems that whenever you choose an action, you are going to indirectly change your beliefs because of availability bias. Different courses of action are going to expose you to different pieces of evidence.

BTW, Pascal's wager can be extended to multiple possibilities by Bayesian decision theory. Whenever you are faced with a number of mutually exclusive choices, you should choose the one that maximizes expected value (likelihood times utility), not the one that is most likely to be true. People who choose to join startups are making this calculation, since most startups fail.
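The decision rule described above can be sketched in a few lines of Python. The options, probabilities, and utilities here are made-up numbers purely for illustration; the point is only that the maximum-expected-value choice can differ from the most-likely-to-succeed choice:

```python
# Choose among mutually exclusive options by expected value, not by
# probability of success alone. All numbers are hypothetical.
options = {
    "join startup": (0.1, 100),  # unlikely to succeed, large payoff
    "keep day job": (0.9, 5),    # likely fine, modest payoff
}

def expected_value(p, utility):
    return p * utility

best = max(options, key=lambda name: expected_value(*options[name]))
# Here the startup wins despite its low success probability:
# 0.1 * 100 = 10.0 versus 0.9 * 5 = 4.5
```

The startup option is chosen even though it succeeds only 10% of the time, which is exactly the structure of the commenter's point.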

"If this condition fails, then it can become impossible to act rationally."

This seems true only if you reject Eliezer's notion that rationality is fundamentally about winning, rather than about following a particular ritual of cognition.


Many statements about ethics, values or agency become nonsense when we talk in terms of the actual laws of physics. But things make sense again when we talk about approximations to the laws of physics; in the usual approximate language that we use everyday, my thoughts only affect reality through my actions, because the minute changes that occur in my brain when I think are too small to see at the "everyday" level of approximation.

I like the way that the metaphor of Allen's dilemma casts light on Pascal's Wager but there's an important distinction the metaphor doesn't connect to. Pascal's Wager is about hypothetical, unobservable consequences, while Allen's problem is based on statistical evidence. Allen can look at the world and see how many people are beset by a similar problem, and how many of them are able to solve their problem with the help of AA. He can see how many become religious and what consequences this has on their subsequent life.

As EY has pointed out several times, Pascal's consequences are a stab in the dark and a hypothesis before evidence. It's hard to take Pascal's proposal seriously. The argument requires that we give the hypothetical infinite weight, but there's still zero empirical evidence for it. Allen may be making the right choice, but Pascal was clearly wrong.

I don't think the AA example does what you say it does. Allen is not choosing to believe in God because of the consequences of that belief. He is choosing a course of action (joining AA) based on the consequences of that action. He judges that the beneficial consequences (sobriety) outweigh the detrimental ones (inaccurate beliefs). When he actually changes his mind about God, he won't be doing it for the consequences of the belief; he will be doing it because of the social pressure. And though he knows now that this will be his true reason for changing his mind, at the time he actually changes it he will come up with some rationalization that obscures this fact.

A belief that in itself leads to a certain outcome is an action. To rationally determine actions, you need valid beliefs about their consequences.

Larry: take the modification I offered a few paragraphs in, and say that it's the belief in God that makes AA efficacious.

Nancy: thanks for the footnote -- nothing I say here should be taken to imply anything about AA, it's merely a convenient placeholder for a hypothetical case (if only because I don't KNOW anything about AA in general -- beyond reading the twelve steps online, I'm completely ignorant of how AA functions).

Paul: Ah, you're right. If the belief itself is what makes AA work, then it seems that, in this case, the act of adopting an irrational belief is a rational action. But I think this scenario is of a different character than Pascal's. Allen's predicament arises out of his own weak abilities either to make rational choices as he is or to modify himself so he does better. If Allen could alter himself so he became violently ill as soon as he started drinking, this would be a better choice than adopting false beliefs to achieve the same end. In Pascal's case the consequence-of-belief doesn't arise out of Pascal's own weaknesses, but out of the hypothetical scenario that God exists, cares about what Pascal thinks, and will read Pascal's mind to find out.

It seems like this whole discussion is contradicting your previous insight that rational agents should always try to win (http://www.overcomingbias.com/2008/01/newcombs-proble/comments/page/2/).

It seems to me that

0.7 * (no puppies saved) < 0.3 * (1 puppy saved)

and hence that we should opt for the latter, regardless of any silliness about what the "truth" is. The rational decision is the one that maximizes the number of (puppies saved), not the one that is most likely to be "true".

When I read "Newcomb's problem.." my first thought was actually, "Doesn't this justify Pascal's wager?" Now you seem to be contradicting yourself and I'm not entirely sure why.

Well taken point from NL on framing the terms of debate.

The commonality between Pascal's wager and the calculation by the subject, Allen, is that both appear to make their beliefs about individual future wellbeing dependent upon *imagined* consequences of presently formulated beliefs, rather than upon rational analysis of evidence.

Pascal reaches his conclusions through deductive logic as well as intuition, while Allen consciously submits to one of a cluster of social constructions calculated to reinforce desired future outcomes. If doing so runs counter to his intuition, then his case is already distinguishable from that of Pascal.

At a less abstract level, Pascal's purpose was to dissuade contemporary nonbelievers from shrinking from religion out of fear of its truth, and rather to have them approach it out of hope that it is so. Nowhere does he portray himself as an unbiased observer. He concludes his wager section by observing to the effect that he would be more worried about being in error and then discovering [his theistic beliefs] to be true after all, than not being in error while believing them true. So, for Pascal and Allen, their underlying purposive action appears to be to generate beliefs which serve as emotional markers toward something else, rather than to discover truths.

Seen another way, may not a central belief such as the one under discussion, formed through conscious will, be viewed metaphorically as a compass? While a compass does not in a physical sense steer a ship, a series of successive moves of the rudder can be made by reference to readings of the compass. Is the activity of Pascal and Allen much different from fashioning compasses by which to steer? I don't know.

"if consequences are simply inadmissible in belief-formation processes"
then what possible objection could be raised to someone holding any belief whatsoever?

If the belief itself is what makes AA work, then it seems that, in this case, the act of adopting an irrational belief is a rational action.

If we must determine how we believe regarding the program before we can determine its effectiveness, its effectiveness is undefined until we reach a conclusion. Given the assumption that we can choose our beliefs, the 'effectiveness' is a cipher, a null and empty variable. What matters is the choice - that leads directly to probable outcomes.

If choosing A leads to a greater probable outcome than ~A, we simply choose A.
