Here's another in my series of rants about how we should change the academic world -- paper reviews. Although some people claim they like reviewing papers, I seem to be a receptacle for evaluating crappy ones. I therefore do not enjoy it!
Currently, papers are reviewed mostly as follows: after a paper is submitted, a "program committee" that can see the authors' names decides who the best N people to review it are (taking into account area of expertise, conflicts of interest, etc.); the reviewers write their reviews and remain anonymous forever; and a decision on whether to accept the paper is made based on these reviews.
The problem I see is that there is very little incentive to write high-quality reviews. Heck, there is very little incentive to even review a paper at all because to a large extent reviewers get zero credit. Unless you are a member of the program committee, your name is usually not even posted anywhere. This, combined with the fact that most submitted papers are not very good, makes me not want to review at all.
So here's what I propose: High-quality reviews should be published. If the review is positive and explains why the paper is of importance, it should be published along with the paper (some journals like Science and Nature are already doing something similar). If the review is negative and gives a non-trivial reason of why the paper should not be published (e.g., a clever break of a cryptosystem, a little-known fact that makes a study useless, etc.), the review should be published instead of the paper. (This should only be done with papers that seem like good ideas at first, for which the reviewer found a subtle but critical flaw.)
Oh, and stop sending me lame papers, please.
Sunday, March 1, 2009
But, who will review the reviewers?
The editors or program committee members.
Generally, who are the original N people who review the paper? Are they not grad students of the PC members?
Usually they are not the grad students of the PC member.
Lame papers = badly written papers, or decently written ones with no useful content?
If it's the badly written ones, any useful suggestions?
It's both, although I can live with badly written ones.
Yes, a lot of them actually are grad students -- unofficially. In the labs I know, the postdocs delegate their reviews to the grad students (which is OK, because the grad students learn how to write a review if the postdoc goes through the reviews with them).
I would like to punish authors of really bad or badly written papers. It seems that some people just try to send crappy, unfinished, badly written stuff -- and if they do so with a few papers, some will get through by chance. Instead, I would like to encourage people to write carefully rather than just trying to increase their paper count.
It's the same with conference talks: I suggest, first, that authors who constantly send crappy papers be banned (or have to pay for reviews), and second, that papers by people who give bad (e.g., unprepared) talks be withdrawn.
How about establishing the Journal of Second-rate Junk? Whoever submits a paper to a top conference or journal would have to agree that if two referees thought the submission was offensively bad (not just below the bar, but so bad that it should never have been submitted in the first place), then it would be published instead in the Journal of Second-rate Junk. This would publicize the bad submission and furthermore keep the authors from publishing it elsewhere. To prevent abuses, the names of the two referees and the supervising editor/PC member would be published there as well.
Luis, not sure if you're serious about this one or just trolling. I'll cautiously assume that you're serious.
The incentive system for review writing in academic computer science seems to be based entirely on reciprocity / collegiality. When I write reviews, it is generally in response to program committee members I know personally. Likewise, if I ask someone to review a paper, it's also someone I know.
You might want to take a look at "Is It Worthwhile to Pay Referees?" in the Southern Economic Journal.
What I can say is this: the cost of publicly offending someone is high. I suspect that reviewers would not be eager to go on record rejecting papers by people in their peer group. Besides, given that research contribution is the real currency in academia, what good would publishing reviews do for an academic striving to obtain tenure or promotion?
I suspect the only viable incentives are reciprocity and cash. And it's not clear that cash would be an improvement.
How about establishing the Journal of Second-rate Junk??
I love it. Let's do it!
Daniel,
You make good points.
Negative reviews would only be published if the reviewer gives consent, and surely would not be about papers that are totally crappy -- I, for one, would have no problem if my negative reviews were published when I think they add to the literature (however, in most cases my negative reviews don't actually add anything to the literature).
I guess the incentive of reciprocity is not quite enough for me to spend time carefully reviewing papers...
Cash is probably useless for me as well. For things I don't want to do (like reviewing papers!), I value my time at a few hundred dollars an hour, and I doubt journals or conferences would be willing to pay for that.
Here is a thought: encourage / require people to post papers online *before* they are evaluated for inclusion in conferences or journals. Then open those posted papers to comments, encouraging / requiring the comments to be non-anonymous. Then make those comments a resource for reviewers. That should:
- Vastly reduce the work of reviewers.
- Allow informal reviewers to cultivate a reputation for their good judgment.
- Ultimately lead to a system where those who choose to review papers and are good at it earn at least a social currency for their documented accomplishment.
I also imagine that bad papers simply won't attract reviews from credible reviewers -- except from those who are unconcerned with (or even enjoy) publicly signing harshly critical reviews.
Donald Geman has an interesting suggestion - abolish conference papers.
I will post here my favourite part of his suggestion:
" My own (half-serious) suggestion is to limit everybody to twenty lifetime papers. That way we would think twice about spending a chip. And we might actually understand what others are doing - even be eager to hear about so-and-so's #9 (or #19). "
The 10 reasons to abolish conference papers:
http://www.cis.jhu.edu/publications/papers_in_database/GEMAN/Ten_Reasons.pdf
I like the idea of a maximum number of papers, and have been talking about that to people here. Unfortunately, it doesn't seem to have many fans.
Perhaps it would suffice to publicly emphasize statistics about a paper's readership, e.g., based on logs from objective observers like the ACM and IEEE? If enough people felt that producing a mediocre conference paper was bad ROI, then perhaps they'd self-censor better. And I can't imagine such a system working without numbers that can be bandied around at tenure and promotion committee meetings.
I'm pretty sure there is a pattern to both badly written and terrible papers. If we had enough committee members teaching a computer which ones are bad or terrible, maybe the computer could recognize the pattern and just put such papers in spam the next time it sees them. Maybe that would be good enough to stock the Journal of Second-rate Junk.
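Something like an email spam filter could work here. A minimal sketch in Python, assuming a corpus of past submissions already labeled by the committee (the corpus, labels, and confidence threshold below are all made up for illustration):

```python
# Hypothetical sketch of a "junk paper" filter in the spirit of an email
# spam filter; the training data and threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Assumed inputs: full text of past submissions and the committee's verdicts.
past_papers = ["...full text of past submission 1...",
               "...full text of past submission 2..."]
verdicts = ["junk", "ok"]  # one committee verdict per past submission

junk_filter = make_pipeline(TfidfVectorizer(stop_words="english"),
                            MultinomialNB())
junk_filter.fit(past_papers, verdicts)

def triage(paper_text, threshold=0.9):
    """Route a submission to the spam folder only when the model is very
    confident; borderline cases still go to human reviewers."""
    probs = junk_filter.predict_proba([paper_text])[0]
    junk_prob = probs[list(junk_filter.classes_).index("junk")]
    return "spam" if junk_prob > threshold else "human review"
```

The high threshold matters: misfiling a legitimate paper is far worse than letting a bad one through to human reviewers.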
How about taking an idea from professional sports and having each conference have a "major league" and "minor league" variant? You don't get to submit to the major league conference unless a majority of the authors have had a paper accepted to the minor league conference in the last N years.
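A toy sketch of that eligibility rule, assuming some record of each author's minor-league acceptance years (the data layout and the choice of N are made up):

```python
# Hypothetical eligibility check: a majority of the authors must have had
# a minor-league paper accepted within the last n_years.

def eligible_for_major_league(authors, acceptance_years, current_year, n_years=3):
    """authors: list of names; acceptance_years: name -> years in which
    that author had a minor-league paper accepted."""
    qualified = sum(
        1 for author in authors
        if any(current_year - year <= n_years
               for year in acceptance_years.get(author, []))
    )
    return qualified > len(authors) / 2

# Example: two of three authors qualify, so the submission is allowed.
accepted = {"alice": [2008], "bob": [2005, 2007], "carol": []}
print(eligible_for_major_league(["alice", "bob", "carol"], accepted, 2009))  # True
```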
ReplyDeleteBTW, a beef from the other side: negative reviews with no content. People who submit reviews like this should be banned:
Relevance: 7 (out of 10)
Significance: 3
Originality: 5
Quality: 6
Exposition: 9
Appreciation of related work: 7
Overall recommendation: Strong reject
Re: the Journal of Second-Rate Junk -- these already exist; they usually have names like "the European Conference on X" or "The Latin American Conference on X" or "The Asian Conference on X".
Now, I have nothing against Europeans or Latin Americans or Asians... it's just that, in my experience, most papers published at these conferences are those that already got rejected from the more prestigious International Conference on X. And due to the "local" flavor, often the reviewers and submitters don't natively speak English, which can lead to some truly terrible-to-read papers getting accepted.
Occasionally one of these local conferences outgrows its shoes and accidentally becomes a world-class conference, but this seems to be relatively rare.
I would like to speak up in defense of one of these conferences: the European Conference on Information Retrieval (ECIR). I can at least say that ECIR 2008 was an outstanding conference.
But I agree in general that regional conferences are usually second-tier.
Hi Luis. Inspired by the tech report I sent you earlier, here is what I propose for a pre-filtering system for conference submissions:
1) Submission:
- each submitting author gets in return 2 papers to pre-review within one week, and then 6 reviews to evaluate within an additional week.
- Among the 2 papers to review, one is already known to be unacceptable or acceptable, and one is not.
- Among the 6 reviews, 3 are already known to be acceptable or not.
2) Selection for review by the PC:
The PC members filter the submissions based on a score computed from the following (a rough sketch of such a score appears after this proposal):
- how much the review of the "challenge" paper agrees with what was already known about the acceptability of this paper;
- how much the validation of the three "challenge" reviews agrees with what was already known;
- how the paper submission was scored by peer submitting authors.
3) Discussion
- The additional work asked of the submitting authors makes the system more scalable than current systems. When performing their work, the members of the program committee can use the validated reviews, in addition to the required reviews from their chosen experts.
- Whereas one could imagine a system based on a majority vote from the peers, I prefer the idea of challenges, which is more suitable for textual evaluation. Papers known to be unacceptable can be taken from last year's pool of clear rejections. Papers known to be acceptable are more difficult to find, I guess.
- I would prefer a more sophisticated system that keeps track over the years of who did which review and of how much their opinion agrees with others', with more incentive to give good reviews, but things will only change progressively.
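To make the score in step 2 concrete, here is a minimal sketch; the weights, field names, and normalization are my own assumptions, not part of the proposal:

```python
# Hypothetical pre-filtering score from step 2. The weights and the data
# layout are illustrative assumptions, not part of the original proposal.

def agreement(given_verdict, known_verdict):
    """1.0 when the submitter's verdict matches the known ground truth."""
    return 1.0 if given_verdict == known_verdict else 0.0

def prefilter_score(record, weights=(0.4, 0.3, 0.3)):
    w_paper, w_reviews, w_peers = weights

    # (a) verdict on the one "challenge" paper with a known outcome
    paper_score = agreement(record["challenge_paper_verdict"],
                            record["challenge_paper_truth"])

    # (b) validation of the 3 "challenge" reviews with known quality
    review_scores = [agreement(given, truth)
                     for given, truth in record["challenge_review_verdicts"]]
    review_score = sum(review_scores) / len(review_scores)

    # (c) average peer score for the submission, normalized from 0-10 to 0-1
    peer_score = sum(record["peer_scores"]) / len(record["peer_scores"]) / 10.0

    return w_paper * paper_score + w_reviews * review_score + w_peers * peer_score

# Example: the submitter judged the challenge paper correctly, validated
# 2 of 3 challenge reviews correctly, and averaged 7/10 from peers.
record = {
    "challenge_paper_verdict": "reject", "challenge_paper_truth": "reject",
    "challenge_review_verdicts": [("ok", "ok"), ("ok", "bad"), ("bad", "bad")],
    "peer_scores": [7, 8, 6],
}
print(round(prefilter_score(record), 2))  # 0.81
```

Submissions whose authors score above some cutoff would then go on to the PC's chosen expert reviewers.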
Publishing reviews would also make sure that the authors pay more attention to suggested changes for the camera-ready version.
And for folks who are new to a research area, it could also help to know what exactly those unseen reviewers are looking for, and to write better papers (say, with better evaluation). This could lead to people trying to game the system, but if you can game a conference, why publish there?
Maybe you should make rebuttals public too (to make it less one-sided).
Hi Luis, what you are discussing here reminds me of "public peer review", as already in use with journals of the European Geosciences Union (see, for instance, http://www.biogeosciences.net/review/index.html ). It is outlined in some detail at http://www.researchinformation.info/risepoct04openaccess.html , while a shorter description is at http://biology.plosjournals.org/perlserv/?request=read-response&doi=10.1371/journal.pbio.0050107&ct=1#r1632 .
ReplyDeleteI can only image the volume of email you receive from individuals with great ideas looking for reviews. It is the prestige of Carnegie Mellon which draws people like myself to acquire dialog with professors and PhD students.
What you may not see is the energy and passion that individuals put into developing these ideas. It is a shame that a majority of these ideas go untapped due to the lack of an audience.
The Do Good Gauge is an abstract to provide a democratic forum for intelligent argument. Natural Language Processing is one of many academic fields of study relevant to this abstract.
As far as giving reviewers credit, the Do Good Gauge is in its infancy. I'm looking for co-authors to help develop a book from the abstract. If someone is looking for their name listed first on the title page, send me an email.
Scott Nesler
The Do Good Gauge
http://www.dogoodgauge.com
I agree that reviewing should be rewarded. Paying reviewers is one option, but if academic prestige depended on the goodness of one's own reviews, we wouldn't need to involve money. Sorry to repeat myself, but one different option is:
http://etuttounmagnamagna.blogspot.com/2009/03/whats-wrong-with-scholarly-publication.html
S. (Stefano Mizzaro)