
March 24, 2009


Editors are usually biased: they are more likely to accept a paper by a well-known scientist from a respected institution than one by an unknown author. Publishing rejections could well reinforce the effect; less-known authors would be more afraid of sending papers to respected journals.

Which leads to the question of why the review process isn't double-blinded (meaning that the reviewer wouldn't know who the author is). Sometimes the author can be guessed from the contents of the paper, but not always.

"It would also raise the bar for editors; readers could see how often rejected papers were accepted at equal or better journals, and potential authors could better evaluate their chances."

I don't think so. Potential authors could hardly learn anything about their chances by reading titles and dates. To learn anything, they would need access to the rejected manuscripts in order to judge their quality. I also doubt authors would try to publish a paper in a better journal after rejection. And it would introduce a new bias: journals would tend to reject papers already rejected elsewhere, without regard to quality.

I think such a suggestion would not improve the objectivity of the reviewing process. On the contrary, it would probably increase the role of the self-confidence and status of authors.

I agree with the comment above, I don't think this would work.

As soon as one person rejects a paper, no one else will touch it. It would concentrate a huge amount of power within a tiny contingent of people.

As readers, we shouldn't care who rejected it should we? How many people rejected the Beatles before they were signed? How many people rejected the Harry Potter manuscript? How many fantastic ideas in the history of science were soundly rejected dozens, perhaps hundreds, of times before being accepted? Theory of evolution, theory of gravity etc...

Richard, if we care who accepts something, we care who rejects it; they are two sides of the same action.

prase, one can overweight institutional affiliation with or without this; I don't understand how this makes it worse.

With this proposed change, would the reviewers be giving up anonymity?

If the publication of rejection histories is to be implemented, the only workable way is for reviewers to be blinded to the rejection history of the papers they review.

>I don't understand how this makes it worse.

It's another way for bias to creep into the review process. This can cause good papers to be rejected for some reason other than their quality; isn't that enough?

That journal that publishes rejected papers: is it a nerdy joke? It looks like it. Look at this question and answer from the FAQ section of Rejecta Mathematica (which also looks like a joke name):

Q: My paper got rejected from Rejecta Mathematica. I'm not sure how to feel about that.
A: Perhaps you should feel honored.

"Richard, if we care who accepts something, we care who rejects it; they are two sides of the same action."

I think I'm not understanding some part of your reasoning. Accepting and rejecting are distinct actions with distinct objects, and thus it is possible for their values to come apart. This is in fact obvious under the current system: on the journal side, a journal considering whether to accept or reject doesn't care much whether other journals have already rejected it, but would care very much whether some other journal has already accepted it; and on the author side, obviously acceptance is a much bigger deal than rejection because it is more rare and requires more work and the rejections bring a relatively minor penalty. What you are proposing is a massive increase in how far rejections can penalize authors and in the potential for embarrassment for journals (e.g., by having them state for the public record that they have rejected papers, some of which might well go on to be important papers in the field; journals sometimes err, and perhaps massively, in what they accept and reject). In short, you seem to be suggesting that everyone advertise their failures, or at least take steps so that if they fail it will automatically be advertised; setting aside the question of whether it would be good, is it really that surprising that nobody prefers that over a system where their failures are not automatically advertised?

If the embarrassment of public rejection outweighs the benefit of having a paper accepted, which I think is a likely claim, especially for already-established authors, then I think the effect of this policy on any given journal will be that far fewer papers, good and bad, will be submitted. The journals, on the other hand, are perfectly willing to reject tons of papers so long as the incentive remains very high to send the very best papers to them. I think the way things are done now is fine, and very little would be gained (and possibly much lost) by adding some element of public humiliation to the mix.

I added to the post.

Brandon, by entering a sporting contest you commit to publicizing your failures. It would be less impressive to win a sporting contest that refused to publicize its failures. And sporting contests do in fact publish failures.

"Imagine a journal that published all its rejections, listing rejected authors, titles, and relevant dates." To this, you need to add:
1) The name of the reviewer.
2) Why the paper was rejected.

In this way, the reviewers would be subjected to scrutiny.

This is more like building a "Digg of science", where people read papers, grade them, and refer them around to friends and colleagues.

Several journals of negative results have launched recently, including one for evolutionary biology.

People are usually eager to signal confidence in their abilities; why in this context do people avoid such signals?

Because research is very hard, and people are not confident in their abilities? The secrecy of rejections permits people to be a little more risky by minimizing the reputational damage from fucking up?

Overall, I agree that this process would be unlikely to improve the objectivity of the review process, though the fear it incites might improve the overall quality of papers.

The largest issue is that the bandwagon effect is terribly strong in peer-reviewed science, and once advertised as rejected (even for trivial reasons) a paper would be hard-pressed to find acceptance elsewhere. Authors generally submit to their highest-expected journal first, and continue working downward until the submission finds a home. In some sense, then, the final home already conveys information on Expected_Value * Possible_Rejection_History. I think an analogous bandwagon effect is missing in your sports analogy, Robin, where failure in one event does not directly bias your chances of success in the next.

However, some notable journals do ask to be informed of previous submission history. Science and Nature both ask to be informed of this. Nature is especially opposed to authors bouncing through the sub-Natures until one accepts.

prase: I agree a double blind process could improve review objectivity. There is the minor difficulty that it cannot be fully blind to all editors (who must make informed decisions about qualified reviewers), but even allowing for this I think it would be a real improvement.

Zac: the current system is actually *not* fine, peer review science is degenerating into a support-mechanism propping up increasing numbers of for-profit journals: the Publish-or-Perish mentality. I agree a change might be in order, but I'm not sure I support rejection tagging.

First I would want to find out how paper quality is related to the number of times a paper was rejected. My expectation is that most reviewers are so hurried, sloppy, prejudiced, or just plain stupid that, for papers that fall somewhere between "stupid" and "brilliant", chance plays a greater part than quality in acceptance or rejection.

I should clarify that I have more experience getting grant proposal reviews than journal article reviews. My impression, although I've not actually counted, is that most negative grant proposal reviews are negative because the reviewer either didn't understand the review criteria, didn't understand the proposal, or saw the proposal as advocating the views of a tribal enemy.

I guess part of the question is who would we trust to filter out potentially bad papers? Risk-averse researchers or editors and peer reviewers? I'd rather have a system in which people can fearlessly send out anything they think might be interesting, knowing the only consequence to a rejection is loss of time, and get the informational benefit of having editors and peer reviewers looking at it. (Unless workshops can take up the slack for that?)


Peer reviewers are usually unpaid volunteers, their time is scarce and behavior such as you describe contributes heavily to long delays from submission to publication.

If your objective is to better match supposedly top-quality research to the supposedly top-quality journals, my guess is that this mechanism would make that goal less likely to be achieved.

I suspect that this disclosure mechanism would increase the potential for misleading information cascades. For example, given that you have been fairly or unfairly rejected by Journal A, is Journal B more or less likely to give your paper an objective review if it knows that Journal A rejected you? I think it's less likely you will get a fair assessment from Journal B.

Carl, the price of the freedom to submit more stuff is the obligation to put in more time reviewing the work of others.

Why do no journals publish their rejections? Probably because their readers are not interested in wading through - or paying for - material that the editors judge to be a load of unprintable garbage.

I think this is an interesting idea.

Historically, I would guess journals do not publish rejected papers because it wastes resources to typeset and print them. The whole point of a selection process is one way or another to conserve resources.

However, nowadays journals (at least in fields in which I am familiar) are electronic. In physics, the papers are often available at a public online archive. This opens up more possibilities. A journal could simply make rejected papers available online, thus dramatically saving costs. Better yet, it might suffice to make available just the authors and the title (and maybe abstract, if it worried authors will change titles to disguise work). This saves dramatically on typesetting; meanwhile if the paper is published elsewhere, it can be found, if not, I presume it's not of much concern.

As more and more journals become for-profit, do we know that papers are rejected based on quality? I mean, a for-profit has tight space constraints. They may like a paper "enough" but just run out of space! Why not just pop everything onto the 'net that meets a certain quality, even if it doesn't have room for the paper version? Not being an academic, I don't know if this would work. . .

Assuming that no author is absolutely confident in his ability to be accepted, and that public rejection hurts publishing prospects at other journals, it seems that any one journal publishing its rejections would drive away papers of its own prestige level (authors would approach a journal a rung lower rather than take the risk of dropping several rungs), while attracting submissions of a much lower level hoping for a long shot. It seems that there are stable equilibria at universal publication or universal non-publication. Perhaps a journal at the top of its pyramid, such that it doesn't face significant competition for its paper submissions (I'm not certain about relative prestige, but perhaps Nature would be at that level), might be able to do it unilaterally, but that type of journal can only lose from publishing rejections.

Re: A journal could simply make rejected papers available online

Any journal that published "rejected" papers in a corner of its web site would screw up the author's chances of getting their then "previously-published" papers accepted elsewhere. Authors would not stand for that - and would avoid that publication like the plague. So, journals do not do this through simple self-interest.

"Most academic papers are rejected by several journals before some journal finally accepts them."

Is this really true? It is certainly not the case in my field (astrophysics) where being rejected by a journal is unheard-of unless you are a crackpot operating from some dank basement (in which case no other journal would accept the paper either).

As a journal editor, let me throw in a few observations. One, unmentioned so far, is that, believe it or not, most of us editors actually have sympathy for those who submit their papers to us. However, in economics, unlike perhaps in astrophysics, the majority of papers get rejected. At my journal the rejection rate is around 85%. Furthermore, about half the papers submitted get "desk rejected" by me upfront without being sent out for reviews (refereeing time is indeed a very scarce commodity).

So, we would be talking about huge numbers of papers, and talking about humiliating large numbers of people. For what purpose? Research is not a sports contest.

I also note that there is a rather notorious history of famous papers that were rejected, some of them by many journals. There just is not much of a gain for the journals in doing this sort of thing.

Although no one has commented on my comment, I am going to add a bit more here. It is fine for the Mason lunch crowd that likes to bet with itself and blog hard until the food falls off the table to view research and acceptances and rejections as a sports contest, and certainly some parts of it can look like that, especially the tournament aspect of Nobel prizes or who gets the top chairs in the top departments and some other things of that sort. But I do not think journal publishing should resemble that, or at least certainly not as much.

Let me note that there are many people who find it hard even to submit papers to journals at all because they suffer great personal pain from the rejections they receive; although the editors are usually more or less kind, the referee reports can sometimes be harsh, insulting, and demeaning, not to mention unfair and inaccurate. So adding the stress of possibly being publicly reported as having been rejected simply adds to this problem.

Now, Robin poses the idea of the possible efficiency of "raising the bar." This argument has been floating around for a number of other proposals (or realities) as well. Thus, Ofer Azar has published papers arguing that the steadily increasing average time to first response by journals in economics is potentially efficient on these grounds, raising the bar for authors who misjudge the appropriate level of journal to which they should be submitting. Of course, this is the opposite of what is done in the natural sciences, where rapid turnarounds and publication are strongly emphasized, and most authors do not like such slow turnarounds (just as I suspect most would not like public revelation of their rejections).

Another way this is done is that certain journals have very high submission fees that are then refunded to authors whose papers are accepted, $650 at the Journal of Financial Economics and a high one also at the Journal of Monetary Economics (both dominated by the sorts of models that are now looking very stupid and useless in the wake of the recent crises in the financial markets). They make this efficiency argument, although it tends to weed out anybody submitting who does not buy into their somewhat distorted view of things (and in the natural sciences one tends to find just the opposite, with there being no submission fees, but then often page fees for authors who are getting published).

Barkley, I am not trying to improve the journal system here; I'm trying to understand human nature. Why do some forums where folks compete for awards hide the identities of losers, while other forums reveal them? This seems an interesting and potentially insightful data point. Yes, high refunded-if-accepted submission fees are a similar phenomenon.


Your main examples are sporting events and contests versus publishing academic research. I do not think the matter really has to do with individual incentives as much as it does with the structure and nature of the respective activities. Thus, for sports, the contest between the rival participants is the activity, both when it is amateur and when it is professional, whereas for academic published research the bottom line is publicizing appropriately the contents of the "winner's" paper, not the contest between it and other papers not published.

This is clearest for the case of professional sports, where the outcomes involve people's livelihoods, just as in academia, where who gets to publish, and where, affects the livelihoods of those involved, especially junior faculty attempting to get tenure in an academic department. The money in professional sports comes from audiences paying to watch the contest between the contestants. Certainly there come to be greater financial rewards for those who win more often than for those who do not (and are perceived as playing well). But even the players who lose are providing a service for the spectators and thus deserve to be paid, at least somewhat. If there were no losers, there would be no games.

In academic research, if there were enough pages in journals available, all research could be published, assuming that somebody is keeping an eye out to make sure that totally false garbage does not get through. Much of what goes on in most fields is that there are page limits for each journal on how much can be published. Thus, the high rejection rates one sees for more highly ranked economics journals do not mean that those who "lose" are doing anything bad or awful. Most papers rejected from the journal I edit are not all that bad, indeed I would say only about 5% of submissions are just awful and totally unpublishable anywhere. Most papers submitted have some sort of original idea and are not completely incompetent. So, it is a matter of selecting the best papers to publish out of a set of possible ones. The point for the "spectators," the readers of a journal, is seeing the papers that make it through the selection process to get published.

There is also another problem not mentioned so far: copyright. It might be one thing to print titles of papers and names of authors (and perhaps of editors and referees). But the minute one starts talking about actually "publishing" the papers themselves (as some thinking about this believe should be done), one runs into ruining the chance of a paper ever being properly published, because of copyright. Also, if a paper is rejected, does one "publish" the first draft or the last revision? Does one publish the desk rejects? Does one reveal to the public all these details, that this paper was desk rejected while that one went through four revisions before finally getting the axe?

Barkley, I only suggested publishing titles/authors/dates. You say for sports "the contest ... is the activity" but "for academic .. the bottom line is ... the contents of the ... paper" but to me that just seems another way to rephrase the question. I see both industries as ultimately allowing customers to affiliate with certified impressive folks, so the issue is about different ways to sort/certify impressiveness.

But in a sports contest, especially ones involving teams, many of those doing the affiliating are affiliating with the losing team, and continue to do so, usually, even if the team loses. They are paying to attend to see their team compete with the hope that they are going to win, or at least play well.

Those reading journal articles, especially those purchasing subscriptions to the journal, are doing so to read the articles published in the journal, or perhaps to affiliate with the journal. But do these people think about "affiliating" with the authors or titles of papers rejected for publication? Certainly not in the same way that fans affiliate with a losing team, or in the case of more individualized sports such as tennis or golf, with a losing player.

