December 10, 2008


Silly typo: I'm sure you meant 4:1, not 8:1.

Ergh, yeah, modified it away from 90% and 9:1 and was just silly, I guess. See, now there's a justified example of object-level disagreement - if not, perhaps, common knowledge of disagreement.

Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring with its infinite regress involving the rational, self-interested actor at the core.

Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.
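That mapping can be made explicit with a toy calculation: two "instruments" sharing the same prior and the same likelihood function must report the same posterior when shown the same evidence. The coin-bias numbers below are made up purely for illustration.

```python
from fractions import Fraction

# Toy illustration: two observers with the same nature (prior +
# likelihood) in the same environment (same evidence) necessarily
# produce the same report. All numbers here are invented.

def bayes_update(prior, likelihood, evidence):
    """Posterior over hypotheses given one observation."""
    unnorm = {h: p * likelihood[h][evidence] for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
likelihood = {
    "fair":   {"heads": Fraction(1, 2), "tails": Fraction(1, 2)},
    "biased": {"heads": Fraction(3, 4), "tails": Fraction(1, 4)},
}

a = bayes_update(prior, likelihood, "heads")
b = bayes_update(prior, likelihood, "heads")
assert a == b  # identical nature + identical data -> identical report
```

Disagreement between two such instruments can then only come from the three places named: different priors, different likelihood functions, or different observed evidence.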

Of course if you knew that your disputant would only disagree with you when one of these three conditions clearly held, you would take their persistent disagreement as showing one of these conditions held, and then back off and stop disagreeing. So to apply these conditions you need the additional implicit condition that they do not believe that you could only disagree under one of these conditions.

There's also an assumption that ideal rationality is coherent (and even rational) for bounded agents like ourselves. Probability theorist and epistemologist John Pollock has launched a serious challenge to this model of decision making in his 2006 book Thinking About Acting.

You'll find the whole thing pretty interesting, although it concerns decision theory more than the rationality of belief, though the two are deeply connected (the connection is an interesting topic for speculation in itself). Here's a brief summary of the book. I'm pretty partial to it.

Thinking about Acting: Logical Foundations for Rational Decision Making (Oxford University Press, 2006).

The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in "What should I do if I were an ideal agent?", but rather, "What should I do given that I am who I am, with all my actual cognitive limitations?"

The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making.

Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory.

Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of "expected utility" is defined to be used in place of expected values.

In chapter nine it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities; plans cannot be meaningfully compared in that way. An alternative, called "locally global planning", is proposed. According to locally global planning, individual plans are to be assessed in terms of their contribution to the cognizer's "master plan". Again, the objective cannot be to find master plans with maximal expected utilities, because there may be none, and even if there are, finding them is not a computationally feasible task for real agents. Instead, the objective must be to find good master plans, and improve them as better ones come along. It is argued that there are computationally feasible ways of doing this, based on defeasible reasoning about values and probabilities.

Shouldn't your updating also depend on the relative number of trials? (experience)

Part of this disagreement seems to be what kinds of evidence are relevant to the object level predictions.

Interesting essay - this is my favorite topic right now. I am very happy to see that you clearly say, "Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world." That is the key idea here. However I am not so happy about some other comments:

"if you want to persuade a rationalist to shift belief to match yours"

You should never want this, not if you are a truth-seeker! I hope you mean this to be a desire of con artists and other criminals. Persuasion is evil; it is in direct opposition to the goal of overcoming bias and reaching the truth. Do you agree?

"the frame of mind of justification and having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements"

Such an attitude is not merely opposed to the spirit of resolving disagreements, it is an overwhelming obstacle to your own truth seeking. You must seek out and overcome this frame of mind at all costs. Agreed?

And what do you think would happen if you were forced to resolve a disagreement without making any arguments, object-level or meta; but merely by taking turns reciting your quantitative estimates of likelihood? Do you think you could reach an agreement in that case, or would it be hopeless?

How about if it were an issue that you were not too heavily invested in - say, which of a couple of upcoming movies will have greater box office receipts? Suppose you and a rationalist-wannabe like Robin had a difference of opinion on this, and you merely recited your estimates. Remember your only goal is to reach the truth (perhaps you will be rewarded if you guess right). Do you think you would reach agreement, or fail?

"How about if it were an issue that you were not too heavily invested in [...]"

Hal, the sort of thing you suggest has already been tried a few times over at Black Belt Bayesian; check it out.

Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem.

To quote from AGREEING TO DISAGREE, By Robert J. Aumann

If two people have the same priors, and their posteriors for a given event A are common knowledge, then these posteriors must be equal. This is so even though they may base their posteriors on quite different information. In brief, people with the same priors cannot agree to disagree. [...]

The key notion is that of 'common knowledge.' Call the two people 1 and 2. When we say that an event is "common knowledge," we mean more than just that both 1 and 2 know it; we require also that 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on. For example, if 1 and 2 are both present when the event happens and see each other there, then the event becomes common knowledge. In our case, if 1 and 2 tell each other their posteriors and trust each other, then the posteriors are common knowledge. The result is not true if we merely assume that the persons know each other's posteriors.

So: the "two ideal Bayesians" also need to have "the same priors" - and the term "common knowledge" is being used in an esoteric technical sense. The implications are that both participants need to be motivated to create a pool of shared knowledge. That effectively means they need to want to believe the truth, and to purvey the truth to others. If they have other goals "common knowledge" is much less likely to be reached. We know from evolutionary biology that such goals are not the top priority for most organisms. Organisms of the same species often have conflicting goals - in that each wants to propagate their own genes, at the expense of those of their competitors - and in the case of conflicting goals, the situation is particularly bad.

So: both parties being Bayesians is not enough to invoke Aumann's result. The parties also need common priors and a special type of motivation which it is reasonable to expect to be rare.
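The exchange-of-posteriors protocol discussed above (taking turns reciting quantitative estimates until they stop moving) can be sketched concretely, in the style of the Geanakoplos-Polemarchakis dialogue process that underlies Aumann's result. Everything below (the state space, the partitions, the event) is an illustrative assumption, not anything from the post.

```python
from fractions import Fraction

# Toy sketch of the agreement dialogue: two agents with a common prior
# but different information partitions take turns announcing their
# posterior for EVENT, and each conditions on what the other's
# announcement reveals. State space, partitions, and event are all
# invented for illustration.

STATES = {0, 1, 2, 3}
PRIOR = {s: Fraction(1, 4) for s in STATES}   # common prior (uniform)
EVENT = {0, 3}                                # the event being estimated

PARTITIONS = {                                # each agent's private info
    1: [{0, 1}, {2, 3}],
    2: [{0, 2}, {1, 3}],
}

def posterior(info):
    """P(EVENT | info) under the common prior."""
    return sum(PRIOR[s] for s in info & EVENT) / sum(PRIOR[s] for s in info)

def dialogue(true_state, rounds=10):
    common = set(STATES)  # states consistent with all announcements so far
    history = []
    for step in range(rounds):
        agent = 1 if step % 2 == 0 else 2
        cell = next(c for c in PARTITIONS[agent] if true_state in c)
        p = posterior(cell & common)
        history.append((agent, p))
        # The announcement reveals which of the speaker's cells (within
        # `common`) would have produced this number; everyone conditions.
        common &= set().union(*(c for c in PARTITIONS[agent]
                                if c & common and posterior(c & common) == p))
        if len(history) >= 2 and history[-1][1] == history[-2][1]:
            break  # announcements have stopped moving: agreement
    return history
```

With these particular partitions the two announcements match immediately (both say 1/2) even though they rest on quite different information, which is exactly the flavor of the quoted Aumann passage; changing EVENT to a set that distinguishes the cells (say {0}) makes the posteriors move for a round or two before converging.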

Um... since we're on the subject of disagreement mechanics, is there any way for Robin or Eliezer to concede points/arguments/details without losing status? If that could be solved somehow then I suspect the discussion would be much more productive.

PK: Unfortunately, no. Arguing isn't about being informed. If they both actually 'Overcame Bias' we'd supposedly lose all respect for them. They have to trade that off with the fact that if they stick to stupid details in the face of overwhelming evidence we also lose respect.

Of the '12 virtues' Eliezer mentions, that 'argument' one is the least appealing. The quality of the independent posts around here is far higher than the argumentative ones. Still, it does quite clearly demonstrate the difficulties of Aumann's ideas in practice.

Let me break down these "justifications" a little:

"Clearly, the Other's object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward."

This points to the fact that the other is irrational. It is perfectly reasonable for two people to disagree when at least one of them is irrational. (It might be enough to argue that at least one of the two of you is irrational, since it is possible that your own reasoning apparatus is badly broken.)

"Clearly, the Other is not taking my arguments into account; there's an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine."

This would not actually explain the disagreement. Even an Other who refused to study your arguments (say, he didn't have time), but who nevertheless maintains his position, should be evidence that he has good reason for his views. Otherwise, why would your own greater understanding of the arguments on both sides (not to mention your own persistence in your position) not persuade him? Assuming he is rational (and thinks you are, etc.) the only possible explanation is that he has good reasons, something you are not seeing. And that should persuade you to start changing your mind.

"Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs."

Again this is basically evidence that he is irrational, and reduces to case 1.

The Aumann results require that the two of you are honest, truth-seeking Bayesian wannabes, to first approximation, and that you see each other that way. The key idea is not whether the two of you can understand each other's arguments, but that refusal to change position sends a very strong signal about the strength of the evidence.

If the two of you are wrapping things up by preparing to agree to disagree, you have to bite the bullet and say that the other is being irrational, or is lying, or is not truth seeking. There is no respectful way to agree to disagree. You must either be extremely rude, or reach agreement.

Don't you think it's possible to consider someone irrational or non-truthseeking enough to maintain disagreement on one issue, but still respect them on the whole?

If you regard persistent disagreement as disrespectful, and disrespecting someone as bad, this is likely to bias you towards agreeing.

Hal, it also requires that you see each other as seeing each other that way, that you see each other as seeing each other as seeing each other that way, that you see each other as seeing each other as seeing each other as seeing each other that way, and so on.

"The Aumann results require that the two of you are honest, truth-seeking Bayesian wannabes, to first approximation, and that you see each other that way."

You also have to have the time, motivation and energy to share your knowledge. If some brat comes up to you and tells you that he's a card-carrying Bayesian initiate - and that p(rise(tomorrow,sun)) < 0.0000000001 - and challenges you to prove him wrong - you would probably just think he had acquired an odd prior somehow - and ignore him.
