## April 01, 2007

If this were anything like my high school math class, everyone else in the class would decide to copy my answer. In some cases, I have darn good reasons to believe I am significantly better than the average of the group I find myself in. For example, take one of my freshman chemistry midterms. The test was multiple choice, with five possible answers for each question. My score was 85 out of 100, among the highest in the class; the average was something like 42. On the final exam in that class, I had such confidence in my own answer that I declared that, for one of the questions, the correct answer was not among the responses offered - and I was right; one of the values in the problem was not what the professor intended it to be. I was also the only one in the class with enough confidence to raise an objection to the question.

On the other hand, there are situations in which I would reasonably expect my estimate to be worse than average. If I wandered into the wrong classroom and had no idea what the professor was talking about, I'd definitely defer to the other students. If you ask me to predict the final score of a game between two well-known sports teams, I probably wouldn't have heard of either of them and would just choose something at random. (The average American can name the two teams playing in the Super Bowl when it occurs. I rarely can, and I don't know whether to be proud or ashamed of this.) I also suspect that I routinely overestimate my chances of winning any given game of Magic. ;)

I'm not a random member of any group; I'm me, and I have a reasonable (if probably biased, given the current state of knowledge in psychology) grasp of my own relative standing within many groups.

Also, when you're told that there is a hidden gotcha, sometimes you can find it if you start looking; this is also new information. Of course, you can often pick apart any given hypothetical situation used to illustrate a point, but I don't know whether that matters.

This is a good point. I think squared errors are often used because they are always positive and also analytic - you can take derivatives and get smooth functions. But for many problems they are not especially appropriate.

Informally, problems are often posed with an absolute-value error function. Like the square root, the absolute value has a cusp at zero, and since it is convex it will "hold water". If some people miss too high and others miss too low, then switching to the average helps in this case as well. If everyone misses on the same side, then switching to the average neither helps nor hurts. So in general it is a good strategy.
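A quick sketch of the two claims above, using made-up guesses around an arbitrary true value: when every error has the same sign, the average's absolute error exactly equals the mean absolute error of the group (switching neither helps nor hurts), and when errors fall on both sides, the triangle inequality guarantees the average does at least as well as the typical guesser.

```python
import random

random.seed(0)
truth = 100.0

# Case 1: everyone misses on the same side of the truth.
same_side = [truth + random.uniform(1, 20) for _ in range(1000)]
avg1 = sum(same_side) / len(same_side)
mae1 = sum(abs(g - truth) for g in same_side) / len(same_side)
# With same-sign errors, the average's error equals the mean absolute error.
assert abs(abs(avg1 - truth) - mae1) < 1e-9

# Case 2: people miss on both sides of the truth.
both_sides = [truth + random.uniform(-20, 20) for _ in range(1000)]
avg2 = sum(both_sides) / len(both_sides)
mae2 = sum(abs(g - truth) for g in both_sides) / len(both_sides)
# Triangle inequality: the average's error never exceeds the mean absolute error.
assert abs(avg2 - truth) <= mae2
```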

I mentioned the other day an example of the average performing well in "guessing beans in a jar" type problems. In that case the average came out 3rd best among the guesses of a class of 73 students. This implicitly uses an absolute-value error function, and the problem was such that people missed on both sides. Jensen's Inequality shows why averages work well in such problems.
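A simulation of this kind of setup (the true bean count and the error spread are invented for illustration; the original class data isn't reproduced here) shows the mechanism: with 73 guessers whose errors are large but roughly symmetric, the class average typically ranks near the top, and by the triangle inequality it can never do worse than the average guesser.

```python
import random

random.seed(1)
true_count = 850  # hypothetical number of beans in the jar

# Simulate 73 students guessing with large, roughly symmetric errors.
guesses = [true_count + random.gauss(0, 200) for _ in range(73)]
average = sum(guesses) / len(guesses)

# Rank the class average against the individual guesses by absolute error.
avg_error = abs(average - true_count)
rank = 1 + sum(1 for g in guesses if abs(g - true_count) < avg_error)
mean_abs_error = sum(abs(g - true_count) for g in guesses) / len(guesses)
print("average's rank out of 73:", rank)
```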

Eliezer, given opinions on some variable X, majoritarianism is not committed to the view that your optimal choice facing any cost function is E[X]. The claim should instead be that the best choice is some average appropriate to the problem. Since you haven't analyzed what is the optimal choice in the situation you offer, how can we tell that majoritarianism in fact gives the wrong answer here?

Hal, the surprising part of the beans-in-a-jar problem is that the guessers must collectively act as an unbiased estimator - their errors must nearly all cancel out, so that variance accounts for nearly all of the error and systematic bias for none of it. Jensen's Inequality does not account for this surprising fact; it only takes advantage of it.
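The decomposition behind this point can be checked numerically with a handful of made-up guesses: the mean squared error of a set of guesses splits exactly into (squared) systematic bias plus variance, so when the group is unbiased, variance is the whole story.

```python
# Decompose mean squared error into bias^2 + variance for some hypothetical guesses.
guesses = [9.0, 10.5, 11.0, 12.5, 13.0]
truth = 11.0

n = len(guesses)
mean = sum(guesses) / n
bias = mean - truth                                  # systematic error of the group
variance = sum((g - mean) ** 2 for g in guesses) / n # spread around the group mean
mse = sum((g - truth) ** 2 for g in guesses) / n     # mean squared error

# The identity MSE = bias^2 + variance holds exactly.
assert abs(mse - (bias ** 2 + variance)) < 1e-12
```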

Robin, I don't claim to frame any general rule for compromising, except for the immodest first-order solution that I actually use: treat other people's verbal behavior as Bayesian evidence whose meaning is determined by your causal model of how their minds work - even if this means disagreeing with the majority. In the situation I framed, I'd listen to the other math students talking, offer my own suggestions, and see if we could find the hidden gotcha. If there is a principle higher than this, I have not seen it.

Eliezer, my best reading of majoritarianism is that it advises averaging the most recent individual probability distributions, and then having each person use expected utility with that combined distribution to make his choice.

In your example, you have students pick "estimates," average them, give them new info and a new cost function, and then complain that the average of the old estimates, ignoring the new info, does not optimize the new cost function.

One would have a severe framing problem if one adopted a rule that one's estimate of X should be the average across people of their estimates, E[X]. This is because a translation of variables to F(X) might be just as natural a way to describe one's estimates, but as Eliezer points out, E[F(X)] usually differs from F(E[X]). So I think it makes more sense to average probabilities, rather than point estimates.
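The framing problem is easy to exhibit with toy numbers (the estimates and the choice of F below are arbitrary): applying F after averaging gives a different answer than averaging after applying F, so "average the point estimates" depends on which parameterization you happened to pick.

```python
import math

estimates = [1.0, 4.0, 9.0]  # hypothetical point estimates of X
f = math.sqrt                # F(X): an equally natural change of variables

f_of_avg = f(sum(estimates) / len(estimates))             # F(E[X])
avg_of_f = sum(f(x) for x in estimates) / len(estimates)  # E[F(X)]

print(f_of_avg, avg_of_f)  # the two "averages" disagree
```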

Robin, that's a fair reply for saving majoritarianism. But it doesn't save the argument from bias-variance decomposition, except in the special case where the loss function is equal to the squared difference for environmental or moral reasons - that is, we are genuinely using point scalar estimates and squared error for some reason or other. The natural loss function for probabilities is the log score, to which the bias-variance decomposition does not apply, although Jensen's Inequality does. (As I acknowledged in my earlier post on The Modesty Argument.)
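One way to see how Jensen's Inequality applies to the log score (using two invented forecasts of the outcome that actually occurred): because log is concave, the linearly pooled probability always scores at least as well as the average of the individual log scores.

```python
import math

# Two forecasters' probabilities assigned to the outcome that actually occurred.
probs = [0.2, 0.8]

pooled = sum(probs) / len(probs)  # linear opinion pool
pooled_score = math.log(pooled)   # log score of the pooled forecast
avg_score = sum(math.log(p) for p in probs) / len(probs)

# Jensen's inequality (log is concave): pooling can't score worse on average.
assert pooled_score >= avg_score
```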

This leaves us with the core question as "Can you legitimately believe yourself to be above-average?" or "Is keeping your own opinion like being a randomly selected agent?" which I think was always the key issue to begin with.
