Comments on: Untangling biases and nuances in double-blind peer review at scale
http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/
August 20-26, 2018, Santa Fe, New Mexico, USA

By: Philippe Muller (Wed, 04 Apr 2018 11:24:11 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-2071

Isn’t the argument about “reviewers blind to each other” also valid for not signing reviews? For the same reason, signing reviews would disincentivize honest negative reviewing of more “powerful” authors (who are the easiest to identify).

By: Chu-Ren Huang (Thu, 28 Sep 2017 06:35:14 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-327

Yes, this is indeed a very comprehensive discussion of whether reviewer identities should be disclosed. It seems to me that the different perspectives very often depend on implementation and on the trade-offs of benefits. There is, however, less discussion of the blindness of submissions. This, in my view, is currently a very serious threat to the future of our field. Let’s start with a very simple principle of academic ethics that I hope most of us can agree on:

A citable paper is not anonymous.

Based on this, the logical conclusion is that anonymous review is incompatible with the acceptance of papers posted on arXiv, other social media, etc. Please think this through logically: making the wrong decision here will seriously endanger CL’s reputation as a field.

By: Emily M. Bender (Thu, 28 Sep 2017 02:55:20 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-326

I’m curious about the answers here, too! Maybe it’s useful for the community to see what the reviewers thought the merits of the paper were, and what they were skeptical about?

By: Emily M. Bender (Thu, 28 Sep 2017 02:53:57 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-325

Thanks, Hal — I hadn’t yet encountered the motivation you cite for keeping reviewers anonymous to each other. (I’ve always been personally irked when that happens, because I like to know who I’m talking to!) But that is a really important angle to consider.

I imagine that our proposal (reviewers’ names revealed to each other at the end) won’t help in this case, because someone who feels that way will likely worry that the person they’re disagreeing with is powerful. Would you agree?

Re benefitting reviewers — in a sense we are trying to figure out ways to benefit reviewers, to entice better reviewing/more effort out of them, because they get ‘more’ for it. But I can see that maybe this isn’t an effective move in that direction.

By: Emily M. Bender (Thu, 28 Sep 2017 02:50:09 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-324

Thank you for engaging in discussion with us!

I strongly disagree with one of your premises, though — while there are overt/conscious acts of bias, there is also unconscious bias (which tends to favor well-known researchers, well-known labs, and people in dominant demographics). While we can, at least to a certain extent, rely on the scruples of reviewers to behave accordingly if we ask them not to go looking for preprints on arXiv or elsewhere, even the most well-meaning reviewers can’t effectively account for unconscious bias.

By: Emily M. Bender (Thu, 28 Sep 2017 02:37:46 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-323

Thank you for your comments — I think one thing we could be clearer about is what is signed when, and for whom. The proposal in the blog post (at least my understanding!) was that the reviews would be anonymous to the author and to the other reviewers until the decisions were made (but known to the AC). At the end of the process (post-decision), reviewer names would be revealed to co-reviewers and, if a reviewer so chose, to the author.

By: Ron Artstein (Wed, 27 Sep 2017 17:16:25 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-319

If the author was diligent in addressing reviewer comments, then parts of the review will be irrelevant to the published paper. What’s the point of publishing those? Is there value in the (short) editing history of the paper?

Maybe edited reviews can be useful, but this puts yet another burden on reviewers.

By: Ron Artstein (Wed, 27 Sep 2017 17:01:03 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-317

The main reason for keeping reviewers anonymous is to allow them to be critical without fear of repercussions. And because each reviewer gets only a tiny sample of papers to review, there’s a non-negligible chance of getting a batch of not-great papers. Requiring reviewers to sign at least one review in order to be recognized creates a perverse incentive to give a positive review to at least one paper in the batch even when none deserve it — which is the exact opposite of encouraging quality reviews.

I think Pullum’s point about publishing the names of accepting referees is to put some pressure against giving favorable reviews to shoddy work. This might be less of a problem at a conference like COLING, where historical acceptance rates are fairly low to begin with, and the problem is more rejection of good work than acceptance of bad work.

I agree that reviewer stress and time compression is a big problem (I definitely suffer from it), and I think the only solution is less reviewing. This requires a community-wide effort and cannot be handled on a conference-by-conference basis. Unfortunately the trend is for each conference to place more and more burdens on reviewers, so a person’s only recourse is to opt out of program committees.

By: Hal Daumé III (Wed, 27 Sep 2017 11:52:27 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-313

Very nice discussion, thanks! I agree with many things that have been said here and in the comments. Here are some points I don’t think have been raised.

Re reviewers being blind to each other: I’ve heard from several people (at least four) that their willingness to disagree with a powerful reviewer (e.g. someone senior who might review their grant proposals at some point) is often quite close to zero when they know the powerful reviewer will learn who they are. FWIW, all those who have mentioned this to me come from underrepresented/historically-excluded populations in the NLP/CL/ML community.

Re signing reviews: I’m not entirely sure what the motivation is here. It’s true I’m more likely to sign my review if I’m confident in it (and probably if it’s positive [and probably also if I’m in some position of authority]) but the general argument seems to get the causality backwards here. In general I’m not a huge fan of “opt in” things because people will naturally opt in iff it benefits them in some way, and I don’t think benefiting reviewers is what you’re trying to solve here.

One related question, not exactly about blindness, is whether reviews are made public for accepted papers. I personally think this is a really nice practice.

By: Nathan Schneider (Wed, 27 Sep 2017 02:06:03 +0000) http://coling2018.org/untangling-biases-and-nuances-in-double-blind-peer-review-at-scale/#comment-309

Does the evidence you cited come from opt-in signed reviewing? If it wasn’t opt-in, I would expect that reviewers might feel compelled to put in more effort because they know they can’t hide.

One incentive to consider: Promise to recognize excellent reviewers, and stipulate that reviewers must sign at least one review in order to qualify.

Another experiment that would be interesting (not sure if it’s been done): ask reviewers, when they submit reviews, to self-assess thoroughness. (This is not necessarily the same thing as confidence: one could have low confidence because a paper is outside one’s area of expertise, yet still read it carefully and give thoughtful feedback.) I would guess that (a) stressed reviewers are aware that their reviews aren’t particularly thorough, and (b) knowing that self-assessed thoroughness will be taken into account by ACs will incentivize some reviewers to put in more effort.
