Author responsibilities and the COLING 2018 desk reject policy

As our field experiences an upswing in participation, our conferences receive more submissions, and we have to keep the reviewing process as efficient as possible. One tool used by editors and chairs is the “desk reject”. This is a way to filter out papers that clearly should not go forward, for whatever reason, without asking area chairs and reviewers to handle them, leaving our volunteers free to spend their energy on the important process of dealing with your serious work.

A desk reject is an automatic rejection without further review. This saves time, but it is also quite a strong reaction to a submission. For that reason, this post clarifies the possible reasons for a desk reject and the stages at which one might occur. It is the authors’ responsibility to avoid these situations.

Reasons for desk rejects:

  • Page length violations. The content limit at COLING is nine pages. (You may include as many pages as needed for references.) Appendices, if part of the main paper, must fit within those nine pages. It’s unfair to judge longer papers against those that have kept to the limit, so exceeding the page limit means a desk reject.
  • Template cheating. The LaTeX and Word templates give a level playing field for everyone. Squeezing out whitespace, adjusting margins, and changing the font size all tilt that playing field and give an unfair advantage. If you’re not using the official template, have altered it, or use it in a way that goes beyond our intent, the paper may be desk rejected.
  • Missing or poor anonymisation. It’s well established that non-anonymised papers from “big name” authors and institutions fare better during review. To avoid this effect, and others, COLING is running double-blind; see our post on the nuances of double-blinding. We do not endeavour to be arbiters of what does or does not constitute a “big name”; rather, any paper that is poorly anonymised (or not anonymised at all) faces a desk reject. See below for a few more comments on anonymisation.
  • Inappropriate content. We want to give our reviewers and chairs research papers to review. Submissions that are clearly not research papers will be desk rejected.
  • Plagiarism. Submissions that have already appeared, have already been accepted for publication at another venue, or have significant overlap with other work submitted to COLING will be desk rejected. Several major NLP conferences are actively collaborating to detect this.
  • Breaking the arXiv embargo. COLING follows the ACL pre-print policy. This means a paper will be considered only if it has never appeared on a pre-print service, or appeared there more than a month before the submission deadline (i.e. before February 16, 2018). Pre-prints published (non-anonymously) after this date may not be submitted for review at COLING. In conjunction with other NLP conferences this year, we’ll be looking for instances of this and desk rejecting them.

Desk rejects can be issued at four separate points. In order:

  1. Automatic rejection by the START submission system, which has a few checks at various levels.
  2. A rejection by the PC co-chairs, before papers are allocated to areas.
  3. After papers are placed in areas, ACs have the opportunity to check for problems; one possible response is a desk reject.
  4. Finally, during and immediately after the allocation of papers to reviewers, an individual reviewer may send a message requesting a desk reject, which will be queried and checked by at least two people from among the ACs and PC co-chairs.

As an honest researcher trying to publish your important and exciting work, the above probably do not apply to you. But if they do, please think twice. We would prefer to send out no desk rejects at all, and we imagine our authors would prefer that too. So, now you know what to avoid!

Postscript on anonymisation

Papers must be anonymised. This protects everybody during review. Anonymisation is a complex issue to implement, which is why we earlier published a post dedicated to double-blindness in peer review. There are strict anonymisation guidelines in the call for papers, and the only way to be sure that nobody takes exception during the review process is to follow these guidelines.

We’ve received several questions about best practices for anonymisation. We realize that in long-standing projects, it can be impossible to truly disguise the group that work comes from. Nonetheless, we expect all COLING authors to observe the following forms of anonymisation:

  1. Do NOT include author names/affiliations in the version of the paper submitted for review.  Instead, the author block should say “Anonymous”.
  2. When making reference to your own published work, cite it as if written by someone else: “Following Lee (2007), …” “Using the evaluation metric proposed by Garcia (2016), …”
  3. The only time it’s okay to use “anonymous” in a citation is when you are referring to your own unpublished work: “The details of the construction of the data are described in our companion paper (anonymous, under review).”
  4. Expanded versions of earlier workshop papers should rework the prose sufficiently that they are not flagged as potential plagiarism. The final published version of such a paper should acknowledge the earlier workshop paper, but that acknowledgment should be suppressed in the version submitted for review.
  5. More generally, the acknowledgments section should be left out of the version submitted for review.
  6. Papers making code available for reproducibility or resources available for community use should host a version of those materials at a URL that doesn’t reveal the authors’ identity or institution.

We have been asked a few times about whether LRE Map entries can be done without de-anonymising submissions.  The LRE Map data will not be shared with reviewers, so this is not a concern.

Keeping resources anonymised is a little harder. We recommend you keep things like the names of people and labs out of your code and files; for example, uploaded Java code that runs within an edu.uchicago.nlp namespace would be problematic. Similarly, if the URL given is within a personal namespace, this breaks double-blindness and must be avoided. Google Drive, Dropbox and Amazon S3 – as well as many other file-sharing services – offer reasonably anonymous (and often free) file sharing URLs, and we recommend you use those if you can’t upload your data/code/resources into START as supplementary materials.
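
One practical way to catch identity leaks like these is to scan your supplementary materials for identifying strings before you upload them. Below is a minimal Python sketch of such a self-check; the search terms are placeholders that you would replace with your own surnames, lab name, and institutional domain (the edu.uchicago.nlp namespace is reused from the example above purely for illustration).

    #!/usr/bin/env python3
    """Pre-submission self-check: scan supplementary materials for
    identity-revealing strings. The terms below are illustrative
    placeholders; substitute your own names, lab, and domains."""

    import sys
    from pathlib import Path

    # Placeholder terms: author surnames, lab or group names,
    # institutional domains, package namespaces, personal URLs.
    IDENTIFYING_TERMS = [
        "edu.uchicago.nlp",  # package namespace from the example above
        "uchicago.edu",      # institutional domain (placeholder)
        "yoursurname",       # author surname (placeholder)
        "yourlabname",       # lab or group name (placeholder)
    ]

    def scan(root: str) -> int:
        """Print every file under `root` whose path or contents mention
        an identifying term; return the number of hits."""
        hits = 0
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            haystacks = [str(path).lower()]
            try:
                haystacks.append(path.read_text(errors="ignore").lower())
            except OSError:
                pass  # unreadable file: its path is still checked
            for term in IDENTIFYING_TERMS:
                if any(term.lower() in h for h in haystacks):
                    print(f"{path}: mentions '{term}'")
                    hits += 1
        return hits

    if __name__ == "__main__":
        # Usage: python check_anonymity.py path/to/supplementary
        target = sys.argv[1] if len(sys.argv) > 1 else "."
        sys.exit(1 if scan(target) else 0)

A non-zero exit status signals that something identifying was found and should be cleaned up before upload. Of course, a scan like this is no substitute for a careful manual read of everything you submit.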
