
February 23, 2009


This looks like a topic for Clark Kent, err... sorry, I mean Hopefully Anonymous.

Personally, I think Robin's analysis is right on here. I'd love to hear some suggestions about what can be done about this sort of bias in society once it has been identified, and also about how to identify it both in society and in our own lives. Even advice on how to overcome this sort of bias in our own lives once a particular case of it has been identified would be nice.

Also, a relevant link

We must use this to our advantage. Get rationalists into prestigious positions or convert prestigious people to rationalism. The populace will then tout rationalism.

A central, easily searchable database of track records would be the best thing since sliced Wikipedia, except almost nobody keeps one right now. The few track records I've seen are either self-selected (for example, War Nerd has an impressive collection of predictions that came true, but he just skips the ones that didn't work, like the number of attackers in Mumbai; in spite of that, I still estimate he's far more accurate than anything in mainstream media) or selected by opponents, who show only the most spectacularly failed predictions, like those on Peak Oil Debunked (for example http://peakoildebunked.blogspot.com/2006/04/279-many-wrong-predictions-of-ken.html) or what Jon Stewart does with politicians on a regular basis.

I think even those highly biased track records have very high value; systematic, neutral track records would be really awesome and could genuinely push discussion about politics and macroeconomics forward from its rather dismal current state.
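To make the "systematic, neutral track records" idea concrete, here is a minimal sketch of how such a database could score forecasters. It uses the standard Brier score for probabilistic forecasts; the pundit names and numbers are entirely invented for illustration.

```python
# Hypothetical sketch: scoring forecasters' track records with the Brier score.
# The forecasters, probabilities, and outcomes below are invented.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    0.0 is perfect; always guessing 50% earns exactly 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Each entry: predicted probabilities of "recession next year" vs. what happened.
track_records = {
    "pundit_a": ([0.9, 0.8, 0.3, 0.7], [1, 1, 0, 0]),
    "pundit_b": ([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]),
}

for name, (probs, actual) in sorted(track_records.items(),
                                    key=lambda kv: brier_score(*kv[1])):
    print(f"{name}: Brier score {brier_score(probs, actual):.4f}")
```

Lower is better, and the 0.25 benchmark is useful: any forecaster scoring above it is doing worse than simply admitting ignorance.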

Tomasz, almost no fields do this now, but my point is that macro econ is one of the main exceptions. Weather is another. And yet prestige in macro or weather has little to do with forecast accuracy. So making forecast track records in other fields will probably not influence prestige there much either.

Cannibal, that is the standard strategy most groups use to try to gain influence - try to gain prestige. It is far from easy.

"It turns out that many macro economists frequently forecast future macro events. Furthermore, many places keep standardized track records of such forecasts, records that can be compared for accuracy; we can compare the accuracy of such folks! Wouldn't that be an ideal basis on which to decide whom to believe?"

Where are these standardized track records?

If they're not available online or from a basic library, can you give us the names of a few of the most consistently accurate macro economists?

If there are some highly accurate but non-prestigious economists, shouldn't somebody who's rational enough not to care too much about prestige be able to make a lot of money (in the long term) from their forecasts?

Bryan is also not just talking about people who made successful forecasts.
He names Arthur Laffer (http://en.wikipedia.org/wiki/Arthur_Laffer) as a real economist who bet his credibility on the forecast that there wouldn't be a recession.

I would bet the effects of status are a lot stronger in political economics than they are in medicine. In the latter, sick people have extremely strong incentives not to care about someone's status. Similarly, some actors, such as airports and airline companies, have very strong incentives to seek out accurate weathermen.

On the other hand, politics is all about being well-perceived by ill-informed masses. Democracy should select for actors who value prestige and popularity in their experts. Accuracy doesn't seem like it would be valued nearly as much.

I'm not sure whether Robin is saying that valuing prestige over accuracy is common among all experts, or whether it comes down to incentives and selection processes. I'd say more the latter.

Shard, I'd bet that already happens. I don't know any investors who take anything but the most general advice (e.g., Warren Buffett's) from people seen as prestigious in the public eye.

One more question. Is there any way for a reasonably accurate macroeconomist to make good money off his predictions? In most fields the answer to such a question is a straight no, but can you think of some way to do it in macroeconomics? There are some industries that do well in a recession (let's say entertainment), or do well without a recession (let's say advertising), or that depend on unemployment rates, etc. There might be a way to pull it off.

We should also seriously consider the possibility that nobody has any clue at all, as an extreme variant of efficient market hypothesis.

David Brin has been calling for a "predictions registry" for some time...

I blog about this quite a bit, possibly more than anyone else.

I think the single biggest reason politicians and reporters don't focus on the relevant experts in general (macro is far from unique on this) is that they and their audiences are gaming with domain experts for status. It's a unilateral status disarmament to give strong deference to the consensus of credentialed experts on a topic. Frankly, I think this provides some of the support for folks like Eliezer.

An alternative reason, which I think is sometimes situationally true, is that the top experts on a topic are often not the most entertaining (a consideration for journalists) or the most politically popular with a constituency (a consideration for politicians) among the people willing to speak or make predictions on that topic. Sometimes, though, I think the domain experts are at least equally entertaining, but there is a desire to check their status by not offering them a public platform and deference on the domain of their expertise.

An archetypal example of the weird disconnect between domain experts and public attention is famed criminal law expert Alan Dershowitz and famed linguistics expert Noam Chomsky's debate at the Kennedy School about, of all things, the geopolitics of Israel and Palestine.

But with the rise of academic blogging, websites, and effective search engines, I think we have a unique ascension of domain experts breaking through the filters of journalists and generalist public intellectuals to make their predictions, analyses and commentaries widely accessible to those interested.

If you weighted prediction accuracy by the cost to society of being wrong (i.e., utility), I'm guessing the overwhelming majority of macroeconomists would be in a worse state than Citi or BoA. If you were to extend the analogy to medicine, I really doubt anyone would trust the same doctor who amputated the wrong limb to operate on them again, but it seems like we don't mind listening to the same economists.

There are a couple pretty public predictions of collapse -- Roubini, Taleb (to a lesser extent), and several people who shorted MBS and made a ton of $$$. So the information is out there -- none of these people seem to be creating policy.

It seems unlikely to me that politicians or normal citizens are capable of evaluating economists. Even in the situation described above (where there is somewhat of an objective metric based on the accuracy of past predictions), I don't think it is so easy for laymen to distinguish good economists from bad ones. Lord knows, there are hundreds of pseudo-economists out there who are claiming to have predicted this crisis. Some people choose whom they will listen to based on this, but most (rightly, I think) do not. I would need to know a lot more about the effectiveness and impartiality of the tracking system you describe before I started criticizing the public for not using it.

It seems to me, Robin, that your explanation does not appreciate just how limited most people (myself included) are in their ability to evaluate economists or other experts. For better or worse, they rely on the economists to evaluate themselves (hopefully taking into account prediction accuracy) through peer review and the distribution of prestigious positions.

So you are correct that the causation goes from prestige to a belief in accuracy, but I don't think it's because people just want to associate with prestige. It's because prestige is the best metric they have.

My last paragraph should read

"So you are correct that the causation goes prestige to a belief in accuracy, but I don't think it's because people just want to associate with prestige. It's because prestige is the best metric they have for accuracy."


Those who understand the game best score the most. Thus, the real experts: your friendly helpful always smiling bankers.
Those who understand the economy best make the most money in it. One hint: it helps to be on the receiving side of compound interest. Let it work its magic, you know.
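The "magic" the comment above alludes to is just the exponent in the standard compound-growth formula. As a quick sketch (the dollar amounts and rate are invented for illustration):

```python
# Compound interest: a principal P at rate r per period grows to P * (1 + r)**n
# after n periods. Growth is exponential, not linear, in n.

def compound(principal, rate, periods):
    return principal * (1 + rate) ** periods

# $1,000 at 7% annually roughly doubles each decade (rule of 72: 72/7 ≈ 10.3 years).
print(round(compound(1000, 0.07, 10), 2))   # ≈ 1967.15
print(round(compound(1000, 0.07, 30), 2))   # ≈ 7612.26
```

The asymmetry the commenter points at falls out directly: whoever holds the principal gets the exponential curve, and whoever pays the interest is on the other side of it.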

Two things occur to me that seem to be missing above:

One, without any reason to think otherwise, I am more trustful of a Harvard academic than one chosen at random because I presume that Harvard University has applied rigorous vetting to its applicants. Unless you know a lot about economics, it is probably better to trust that Harvard and other prestigious universities do a better job at identifying the top experts than you would, by studying some metric you invented.

Two, one problem with using forecasts is that if the forecasts aren't really based on rigorous understanding, you will be selecting "experts" by chance. Consider, for instance, selecting an investment manager by past performance. I would think that if you applied this strategy to mutual funds, you would lose money (at least relative to the market average). That's because, by and large, a mutual fund outperforms the average by sheer luck (you suffer for their luck because a fraction of it involves current assets being fortuitously overpriced; I believe that after strong performance for some time a mutual fund typically slumps).

If prestigious universities do not use forecasting ability to select applicants, perhaps this reflects a realization among experts that such forecasting is presumptuous and naive, or otherwise not an indicator of expertise.

A better analogy occurs to me: your friend tells you about a friend who predicted the winner of the last four Superbowls. How much money do you put on his prediction next year, when the time arises?

Perhaps you're willing to entertain a modest bet, but you shouldn't make any real sacrifice. The reason is we know, to a high degree, winning a football game against a team of comparable ability depends on chance circumstances, and although this friend-of-friend's success might reflect some good analysis, for all you know he could have just been lucky. [By adding more successful predictions we can make the probability lower, but if we also add more friend-of-friends making those predictions we increase the chance that _someone_ will do very well.]
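The friend-of-friends point in the bracketed note can be made quantitative. Under the deliberately crude assumption (mine, not the commenter's) that picking a Super Bowl winner is a fair coin flip:

```python
# If each of N people guesses the winners of 4 Super Bowls as 50/50 coin flips,
# how likely is it that at least one compiles a perfect record by luck alone?

def prob_someone_perfect(n_people, n_games):
    p_one = 0.5 ** n_games                  # one person perfect by pure chance
    return 1 - (1 - p_one) ** n_people      # at least one of n_people succeeds

print(f"{prob_someone_perfect(1, 4):.3f}")    # 0.062 — a single friend
print(f"{prob_someone_perfect(50, 4):.3f}")   # 0.960 — someone in a crowd of 50
```

So a lone friend with a perfect four-year record is mildly surprising, but among fifty friends-of-friends, a perfect record is nearly guaranteed to exist somewhere, which is exactly why the anecdote carries so little evidential weight.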

@ Robin Hanson: I never learned much macro-econ; they didn't respect it at Caltech where I got my Ph.D.

Interesting wording in a post about "prestigious economists", some of whom are criticized for not being actual degreed economists, and the unreliability of prestige in general. According to the George Mason Econ faculty bios (and his own personal CV page), Robin Hanson has no formal degree in economics at any level. From this post, I would say the casual reader would be led to believe that he has a PhD in econ from the prestigious school of Cal Tech. This would seem to suggest a need to signal more prestige for whatever reason.

Yes, I am aware that the opening statement does not specifically say that he has a PhD in economics.

retired (who calls himself "anon" here), I most certainly do have a Ph.D.

Mike, why is it easier to forecast well by chance than to become, by chance, a Harvard prof with accurate advice?

Robin: To be Hansonesque, I did not say you do not have a Ph.D.

Sorry about the "anon" nom-de-plume. I chose the wrong TypePad drop-down name.

"Cannibal, that is the standard strategy most groups use to try to gain influence - try to gain prestige. It is far from easy."

Really, Robin? Do any of your more prestigious professor friends get letters from groups that desire influence? Have you ever heard of a group awarding a scholarship to a member? Heck, do these groups even hire PR firms?

What are some of the tactics used by groups that are trying to gain prestige? I'm not sure celebrity Scientologists count as "prestigious".

"One, without any reason to think otherwise, I am more trustful of a Harvard academic than one chosen at random because I presume that Harvard University has applied rigorous vetting to its applicants."
They have. They make sure to admit only people with rich parents.
"Consider for instance, selecting an investment manager by past performance. I would think, if you applied this strategy to mutual funds, you would lose money (at least relative market average)."
You would. I've looked into it. If you know how to make money by shorting them, please tell me. :)

I second Yvain's question.

I'd be curious to hear the story about Caltech's lack of respect for macroecon. I certainly lack respect for macro. I'd be interested in hearing what Robin thinks Caltech is trying to signal by being different, not having more formalized academic departments, and having "divisions" instead. Does that just break down into profs acting like they are in departments anyway? Was going through the Ph.D. program that was called a Ph.D. in Social Science different in any substantive way from what Robin sees in GMU's Ph.D. program or others?

To clarify, was going through the Caltech Social Science Ph.D program substantially different than going through a GMU Econ PH.D program or a poli sci Ph.D program, in Robin's estimation? Other than the fact that some schools will have profs who want to stress the importance of macroecon, where Caltech seemed not to respect it, of course.

You're missing the point. The financial system is one big scam. Consider the Compound Interest Paradox (http://fskrealityguide.blogspot.com/2007/06/compound-interest-paradox.html). Most people have one of two strong reactions to that post. They either say "Very insightful, FSK!" or "FSK is full of ****! FSK should go take an introductory economics course!"

Economics is one of those issues where the vast majority of people can't think rationally, especially those who have received a lot of "education"/brainwashing.

Publicise the high status individuals participating in prediction markets? Personalise them?

It seems that many notable academics in Australia get the influence they have more because they have a compelling style of public speaking and slot directly into some stereotype of a wise academic.

Robin, there are lots of interesting micro issues in this crisis that you can talk about. What about this? I listed a few more issues here.

From Wei Dai's blog: "It seems that some people did notice the flaws, and tried to short the market, but there is so much 'dumb money' out there which can easily overwhelm 'smart money' on a timescale of years. Is this a problem for decision markets? Why or why not?" -- this is the sort of microeconomic issue related to the crisis that I personally want to see more discussion about. But maybe it's not an issue of insufficient demand; people really seem not to know what to write. And microeconomists, unlike macroeconomists, don't like to just make up answers.

@Wei Dai

"interesting micro issues"

Indeed, there are some issues of incentives, policy design and signaling that Robin could discuss. But I'm impressed by Robin's willingness to admit he doesn't know very much about banking practice, and is therefore slow to discuss them. I am unimpressed by certain prolific authors whose repeated writings display a terrible ignorance of banking practice. And so while their "economic discussions" may have meaning in some abstract realm, they are fundamentally laughable from an actual banking perspective.

As for the article against David X. Li, wow, what a hater. Li is a genius. The article is wrong to attempt to contrast him with Paul Wilmott. Altho' there is one quote saying it's not Li's fault, the overall article is certainly biased against Li. Wilmott, basically the father of all quants, said long ago that these models weren't perfect and shouldn't be the end-all. Li's model was meant to be one tool to help make rating issues more tractable; it was never meant to be used alone, without other metrics, it was never meant to rate a billion-dollar deal in 20 minutes, and Li always said so.

When children play with handguns it's the parents' fault, not Eli Whitney's.

Wei, reading an article or two would not make me an expert.

Zac, persistence of stupidity is a problem for most any consensus institution; I can't see why it would be more a problem with decision markets than with other such institutions.

To Philip Goetz, who commented on one of my comments: I was referring to applicants for faculty positions at Harvard. I presume that the existing faculty at Harvard does the best job they believe they can to select the best future faculty -- and I don't see any reason to believe they fail to do so. Of course I use "Harvard" just to pick a random prestigious school, not to suggest it's distinctly better than the rest.

To Robin Hanson: selection of "Harvard" faculty is chance insofar as experts are unable to rate the level of expertise of their colleagues, while forecasting economics is chance insofar as experts are unable to understand the world well enough to make forecasts. It seems straightforward to me that experts can, in principle, rate the expertise of their colleagues while at the same time not being able to make forecasts. I guess this raises the question of what is valued as "expertise" when it doesn't involve forecasts. But I think we have a good sense of a person's creativity, aptitude, and knowledge -- in deep theoretical physics there are few predictions too, but still it is very clear who the best physicists are.

When children play with handguns it's the parents' fault, not Eli Whitney's.

Who is supposed to be the parents here? The government? In that case it's more like handing a gun to a kid who is plainly playing in the street without adult supervision.

Although Li did warn against misuse of his research, we can and should still blame him for not warning often enough, and loud enough. Even if that isn't fair, it's good if it decreases the probability of the next crisis by even a small amount (by teaching other researchers to give more and louder warnings in similar situations).

@Wei Dai

"Who is supposed to be the parents here?"

Dick Fuld. John Thain. And those other guys with the legal fiduciary responsibility - I believe they are sometimes known as "Directors?" They are supposed to be, you know, the grown-ups here?

"we can and should still blame him for not warning often enough, and loud enough"

What would have been the amount of warning that would, without hindsight, have been "enough?"

After Li returned to China, Wilmott and several other well-known quants, and even quant critics such as Taleb, picked up this role, shouting from the rooftops (or, in Wilmott's case, from the pages of his glossy quant magazine) about the need to use the model responsibly and to be aware of its dangers. Wasn't this enough?

With all due respect, Wei Dai, what did you want? A personal letter to your house? All the quants knew the issues, they weren't a secret.

While Bryan reports that "almost none of the economic 'experts' pontificating [in the media] on Obama's economic plan are actually [degreed] economists", the updated but still interesting list of PhD economists who have appeared on recent chat shows is here.

There is a problem I can think of regarding prediction metrics. In areas such as macroeconomics and finance in general (as they relate to the crisis), there are plenty of people who predicted accurately what would happen, and even more who got it wrong. But the ones who got it correct might not have the insights that would make them better suited to finding a way out (think technical traders), and some who predicted parts inaccurately might have had the right intuition but been wrong in this scenario (Jim Rogers?). It is a complex process with multiple variables, all of which interact with each other, making prediction as much art as science. It would lead to the same problems, such as the herd mentality that led most banks/traders down the same path until they collapsed. If we had polled predictions just before the collapse, all the guys who said it would continue forever would have been most accurate; if we had done it just after, that status would have reversed overnight.

For forecast-based selection to be accurate, we would need forecasts over a long period to see whether they're valid or just statistical anomalies, in the manner of the monkey-Shakespeare problem. This method still has the same problems, but fewer of them, because now the bubbles might be a ten-year event instead of a five-year one. What is needed is multiple opinions, some necessarily contrarian, with constant debate about the viability of any policy and the continuation of any course of action.

That being said, I am an economist and a portfolio manager - that information might be relevant in overcoming my personal bias.
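The point above about needing long forecast records can be illustrated with a rough simulation. All parameters here are my own invented assumptions: one genuinely skilled forecaster with a 60% hit rate competing against a field of 50% coin-flippers.

```python
import random

random.seed(0)

# How often does a genuinely skilled forecaster (60% hit rate) end up with the
# single best record in a field of lucky guessers (50%), as the record lengthens?

def skilled_wins(n_predictions, n_guessers, trials=1000):
    wins = 0
    for _ in range(trials):
        skilled = sum(random.random() < 0.6 for _ in range(n_predictions))
        best_guesser = max(
            sum(random.random() < 0.5 for _ in range(n_predictions))
            for _ in range(n_guessers)
        )
        wins += skilled > best_guesser
    return wins / trials

for n in (10, 50, 200):
    print(f"{n:>4} predictions: skilled forecaster ranks first "
          f"{skilled_wins(n, n_guessers=20):.0%} of the time")
```

With only a handful of predictions, some lucky guesser almost always tops the leaderboard; the skilled forecaster only separates from the pack as the record grows to dozens or hundreds of calls, which for annual macro events means decades.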

"It is a complex process with multiple variables all of which interact with each other making predictions as much art as science."

I've seen this line throughout my schooling and professional life. Is it anything more than bullshit? Would we be more accurate to use the latest language of cognitive science and say "as much system 1 intelligence as system 2 intelligence"? Or alternatively, is what a line like this really means "as much bullshit as science"? Where, and what, is the non-science art that helps increase predictive accuracy?
