Untangling biases and nuances in double-blind peer review at scale

It’s important to get reviewing right, and to remove as many biases as we can. We had a discussion about how to do this for COLING, presented in this blog post in interview format. The participants are the program co-chairs, Emily M. Bender and Leon Derczynski.

LD: How do you feel about blindness in the review process? It could be great for us to have blindness in a few regards. I’ll start with the most important to me. First, reviewers do not see author identities. Next, reviewers do not see each other’s identities. Many people would adjust their own review to align with, e.g., Chris Manning’s (which sounds terribly boring for him, if it happens!). Third, area chairs do not see author identities. Finally, area chairs do not see reviewer identities in connection with their reviews, or with a paper. But I don’t know how much of this is possible within the confines of the conference management software. The last seems the most risky; but reviewer identities being hidden from each other seems like a no-brainer. What do you think?

Reviewers blind from each other

EMB: It looks like we have a healthy difference of opinion here 🙂 Absolutely, reviewers should not see author identities. On reviewers not seeing each other’s identities, though, I disagree. I think the inter-reviewer discussion tends to go better if people know who they are talking to. Perhaps we can get the software to track the score changes and ask the ACs to be on guard for bigwigs dragging others to their opinions?
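(A minimal sketch of the kind of tracking EMB suggests, assuming per-reviewer overall scores can be exported before and after the discussion phase. Everything below, from the data format to the threshold, is hypothetical; it is not a feature the conference software actually offers.)

```python
# Hypothetical sketch: flag discussions where the other reviewers'
# scores drift toward one reviewer's initial position.

def convergence_flags(before, after, threshold=0.5):
    """before, after: dicts mapping reviewer id -> overall score."""
    flags = []
    for anchor, anchor_score in before.items():
        others = [r for r in before if r != anchor]
        if not others:
            continue
        # Mean distance of the other reviewers from the anchor's
        # initial score, before vs. after discussion.
        pre = sum(abs(before[r] - anchor_score) for r in others) / len(others)
        post = sum(abs(after[r] - anchor_score) for r in others) / len(others)
        if pre - post > threshold:
            flags.append((anchor, round(pre - post, 2)))
    return flags

# Example: the other reviewers drift toward r1's initial score of 5.
before = {"r1": 5.0, "r2": 2.0, "r3": 3.0}
after = {"r1": 5.0, "r2": 4.0, "r3": 4.5}
print(convergence_flags(before, after))  # [('r1', 1.75)]
```

A flag like this would not prove anything on its own; it would just be a cue for the AC to read that discussion thread more carefully.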

LD: Alright, we can try that; but after reading that report from URoch, how would you expect PhD students, postdocs, or assistant professors to have behaved in discussion around a review of Florian Jaeger’s, if they had, or intended to have, any connection with his lab? On the other side, I hear a lot from people unwilling to go against big names, because they’ll look silly. So my perception is that discussion goes worse when people know who they’re contradicting—though reviews might end up being more civil, too. I still think big names distort reviews, despite getting reviewing wrong just as often as the small names, so letting reviewers know who the others are makes for less fair reviewing.

EMB: I wonder to what extent we’ll have ‘big names’ among our reviewers. I wonder, though, if we can get the best of both worlds by revealing all reviewers’ names to each other only after the decisions are out. That way people will be on good behavior in the discussions (and reviews), knowing that they’ll be associated with their remarks eventually, but won’t be swayed by big names during the process?

LD: Yes, let’s do this. OK, what about hiding authors from area chairs?

Authors and ACs

EMB: I think hiding author identities from ACs is a good idea, but we still need to handle conflicts of interest somehow. And then there are the cases where reviewers think the authors should be citing previous work X, when X is actually by the authors themselves. Maybe some of the small team of “roving” ACs could do that work? I’m not sure how they could handle all the COI checking, though.

LD: Ah, that’s tough. I don’t know too much about how the COI process typically works from the AC side, so I can’t comment here. If we agree on the intention—that author identities should ideally be hidden from ACs—we can make the problem better-defined and share it with the community, so some development happens.

EMB: Right. Having ACs be blind to authors is also being discussed in other places in the field, so we might be able to follow in their footsteps.

Reviewers and ACs

LD: So how about reviewer identities being hidden from ACs?

EMB: I disagree again about area chairs not seeing reviewer identities next to their reviews. While a paper should be evaluated solely on its merits, I don’t think we can rely on the reviewers to get absolutely everything into their reviews. And so having the AC know who’s writing which review can provide helpful context.

LD: I suppose we are choosing ACs we hope will be strong and authoritative in their domains. Do you agree there’s a risk of bias here? I’m not convinced that knowing a reviewer’s identity helps so much: all humans make mistakes with great reliability (else annotation would be easier), so what we really get is random magnification or minimization of a review’s weight, depending on the AC’s knowledge of a particular reviewer, while any given review’s quality varies on its own.

EMB: True, but/and it’s even more complex: The AC can only directly detect some aspects of review quality (is it thorough? helpful?) but doesn’t necessarily have the ability to tell whether it’s accurate. Also—how are the ACs supposed to do the allocation of reviewers to papers, and do things like make sure those with more linguistic expertise are evenly distributed, if they don’t know who the reviewers are?

LD: My concern is that ACs will have biases about which reviewers are “reliable” (and anyway, no reviewer is 100% reliable). However, in the interest of simplicity: we’ve already taken steps to ensure that we have a varied, balanced AC pool this iteration, which I hope will reduce the effect of AC-reviewer bias compared to conferences with mostly static AC pools. And the problem of allocating reviewers to papers remains unsettled.

EMB: Right. Maybe we’re making enough changes this year?

LD: Right.

Resource papers

LD: An addendum: this kind of blindness may prove impossible for resource-type papers, where author anonymity may need to become an optionally relaxable constraint.

EMB: Well, I think people should at least go through the motions.

LD: Sure—this makes life easier, too. As long as authors aren’t torn apart during review because someone can guess who is behind a resource.

EMB: Good point. I’ll make a note in our draft AC duties document.

Reviewing style

LD: I want to bring up review style as well. To nudge reviewers towards good reviewing style, I’d like reviewers to have the option of signing their reviews, with signatures made available to authors only at notification time. Reviewer identities would not be attached to specific reviews, but given in aggregate, in the form “Reviewers of this paper included: Natalie Schluter.” We know that adversarial reviewing drops when reviewer identity is known, and I’d love to see CS—a discipline known for nasty reviews—begin to move in a positive direction. Indeed, as PC co-chairs of a CS-related conference, I feel we in particular have a duty to address this problem. My hope is that I can write a script to add this information, if we do it.

EMB: If the reviewers are opting in, perhaps it makes more sense for them to claim their own reviews. If I think one of my co-reviewers was a jerk, I would be less inclined to put my name to the group of reviews.

LD: That’s an interesting point. Nevertheless, I’d like us to make progress on this front. In some time-rich utopia it might make sense to have all of a paper’s reviewers agree whether or not to sign all three reviews, and only have their identities revealed to each other after that—but we don’t have time. How about this: reviews may be signed, but only at the point notifications are sent out? This prevents reviewers from knowing who each other are during the process, lets those who want to remain anonymous do so, and protects us all from the collateral damage that jerk reviewers cause.

This could work with a checkbox—”Sign my review with my name in the final author notification”—and the rest can be scripted around Softconf.

EMB: So how about this: an option to sign for the authors’ view (the checkbox), plus all reviewer names revealed to each other once the decisions are done?

LD: Good, let’s do that. Reviewer identities are hidden from each other during the process and revealed afterwards, and reviewers have the option to sign their review via a checkbox in Softconf.

EMB: Great.
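For the scripting step LD mentions, here is a minimal sketch of what building the aggregate signature line could look like, assuming reviewer assignments and the checkbox value can be exported from Softconf as a CSV. All column names below are made up for illustration; they are not Softconf’s actual export format.

```python
# Hypothetical sketch: build the pooled signature line for each paper
# from an exported CSV with columns paper_id, reviewer_name, sign_review.
import csv
from collections import defaultdict

signers = defaultdict(list)  # paper id -> names of opted-in reviewers
with open("assignments.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["sign_review"] == "yes":
            signers[row["paper_id"]].append(row["reviewer_name"])

def signature_line(paper_id):
    """Names are pooled across a paper's reviews, never attached
    to any specific review."""
    names = sorted(signers.get(paper_id, []))
    if not names:
        return ""
    return "Reviewers of this paper included: " + ", ".join(names)

# Appended to each author notification at decision time, e.g.
# "Reviewers of this paper included: Natalie Schluter."
```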

Questions

What do you think? What would you change about the double-blind process?

Writing Mentoring Program

Among the goals we outlined in our inaugural post was the following:

(1) to create a program of high quality papers which represent diverse approaches to and applications of computational linguistics written and presented by researchers from throughout our international community;

One of our strategies for achieving this goal is to create a writing mentoring program, which takes place before the reviewing stage. This program is focused on helping those who perhaps aren’t used to publishing in the field of computational linguistics, who are early in their careers, and so on. We see mentoring as a tool that makes COLING accessible to a broader range of high-quality ideas. In other words, this isn’t about pushing borderline papers into acceptance, but rather about alleviating presentational problems with papers that, in their underlying research quality, easily meet the high required standard.

In order for this program to be successful, we need buy-in from prospective mentors. In this blog post, we provide the outlines of the program, in order to let the community (including both prospective mentors and mentees) know what we have in mind and to seek (as usual) your feedback.

We plan to run the mentoring program through the START system, as follows:

  • Anyone wishing to receive mentoring will submit an abstract by 4 weeks before the COLING submission deadline. Authors will be instructed that submitting an abstract at this point represents a commitment to submit a full draft by the mentoring deadline and then to submit to COLING.
  • Requesting mentoring doesn’t guarantee receiving mentoring and receiving mentoring doesn’t guarantee acceptance to the conference program.
  • Any reviewer willing to serve as mentor will bid on those abstracts and indicate how many papers total they are willing to mentor. Mentors will receive guidance from the program committee co-chairs on their duties as mentors, as well as a code of conduct.
  • Area chairs will assign papers to mentors by 3 weeks before the submission deadline, giving priority as follows; a sketch of one possible assignment procedure appears after this list. (Note that if there are not enough mentors, not every paper requesting mentoring will receive it.)
    1. Authors from non-anglophone institutions
    2. Authors from beyond well-represented institutions
  • Authors wishing to receive mentoring will submit complete drafts via START by 3 weeks before the submission deadline.
  • Mentors will provide feedback within one week, using a ‘mentoring form’ created by the PCs structured to encourage constructive feedback.
  • No mentor will serve as a reviewer for a paper they mentored.
  • Mentor bidding will be anonymous, but actual mentoring will not be (in either direction).
  • Mentors will be recognized in the conference handbook/website, but COLING will not indicate which papers received mentoring (though authors are free to acknowledge mentorship in their acknowledgments section).
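To make the priority scheme above concrete, here is a minimal sketch of one way the ACs’ assignment step could work. The data structures are made up for illustration; they are not an actual START export format.

```python
# Hypothetical sketch: greedily fill mentor capacity in priority order.

def assign_mentors(papers, bids, capacity):
    """papers: list of dicts with 'id' and 'priority' (1 = highest);
    bids: dict mapping mentor -> set of paper ids they bid on;
    capacity: dict mapping mentor -> max number of papers."""
    load = {mentor: 0 for mentor in bids}
    assignment = {}
    for paper in sorted(papers, key=lambda p: p["priority"]):
        for mentor, wanted in bids.items():
            if paper["id"] in wanted and load[mentor] < capacity[mentor]:
                assignment[paper["id"]] = mentor
                load[mentor] += 1
                break
        # Papers left unassigned simply do not receive mentoring.
    return assignment

papers = [{"id": "p1", "priority": 2}, {"id": "p2", "priority": 1}]
bids = {"mentor_a": {"p1", "p2"}}
capacity = {"mentor_a": 1}
print(assign_mentors(papers, bids, capacity))  # {'p2': 'mentor_a'}
```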

As a starting point, here are our initial questions for the mentoring form:

  • What is the main claim or result of this paper?
  • What are the strengths of this paper?
  • What questions do you have as a reader? What do you wish to know about the research that is not yet clear from the paper?
  • What aspect of the paper do you think the COLING audience will find most interesting?
  • Which paper category/review form do you think is most appropriate for this paper?
  • Taking into consideration the specific questions in that review form, in what ways could the presentation of the research be strengthened?
  • If you find grammatical or stylistic issues in the writing, or if you think improvements are possible in the overall organization and structure, please indicate these. It may be most convenient to do so by marking up a PDF with comments.

Regarding the code of conduct: by signing up to mentor a paper, mentors agree to:

  • Maintain confidentiality: Do not share the paper draft or discuss its contents with others (without express permission from the author).  Do not appropriate the ideas in the paper.
  • Commit to prompt feedback: Read the paper and provide feedback via the form by the deadline specified.
  • Be constructive: Avoid sarcastic or harsh evaluative remarks; phrase feedback in terms of how to improve, rather than what is wrong or bad.

The benefits to authors are clear: participants will receive feedback on the presentation of their work which, if heeded, might improve both their chances of acceptance and the impact of the paper once published. Perhaps the benefits to mentors are more in need of articulation. Here are the benefits we see: Mentors will be recognized through a listing in the conference handbook and website, with outstanding mentors receiving further recognition. In addition, mentoring should be rewarding in itself, because the exercise of giving constructive feedback on academic writing provides insight into what makes for good writing. Finally, the mentoring program will benefit the entire COLING audience, through both improved presentation of research results and improved diversity of the authors included in the conference.

Our questions for our readership at this point are:

  1. What would make this program more enticing to you as a prospective mentor or author?
  2. As a prospective mentor or author, are there additional things you’d like to see in the mentoring form?
  3. Are there points you think we should add to the code of conduct?


What kinds of invited speakers could we have?

As we begin to plan the keynote talks for COLING, we are looking for community input. The keynote talks, among the few shared experiences at a conference with multiple parallel tracks, serve both to anchor the ‘conversation’ the field is having through the conference and to push it in new directions. In the past, speakers have come both from close to the center of our community and from outside it, lending new, important perspectives that contextualize COLING and sharing stories and insights that have led to great successes.

We are seeking two kinds of input:

  1. In public, in the comments on this post: What kinds of topics would you like to hear about in the invited keynotes? We’re interested both in suggestions within computational linguistics and in specific topics from related fields: linguistics, machine learning, cognitive science, and applications of computational linguistics to other fields.
  2. Privately, via this web form: If you have specific speakers you would like to nominate, please send us their contact info and any further information you’d like to share.