COI policy

With a conference the size of COLING, managing conflicts of interest in the reviewing process is a challenge. Below, we sketch our COI handling policy, in the interest of transparency. In all cases, our goals are to maintain the fairness and integrity of the double-blind review process while also making sure that our hard-working volunteer program committee members can still submit to COLING.

reviewer <> author

Softconf will automatically flag a COI for any reviewer from the same institution as an author. In addition, in the bidding phase, we will ask reviewers to indicate any COIs that were not automatically caught. When ACs match reviewers to papers, they will do so in a way that avoids COIs.

AC <> author

Any paper for which an AC has a COI (beyond simply sharing an affiliation) will be handled by rerouting the paper to another area. Here we’re talking about papers authored by ACs, their students, or other close collaborators. Since the areas are emergent (rather than strictly defined a priori), we anticipate it being relatively straightforward to find the next best matching area for all such papers. Papers authored by the ACs themselves are relatively straightforward to detect. Beyond that, we will ask ACs to identify any such COI papers as we assign papers to areas.

PC chair <> author

Perhaps the trickiest case is COIs involving the PC co-chairs ourselves. Neither of us is submitting our own papers to the COLING main conference. (Workshops, being handled entirely separately, are fair game in principle.) However, the fact that we’ve taken on this role shouldn’t prevent our students and other close collaborators from submitting to COLING. In this case, the entire process (including assignment to an area or possibly the “Special Circumstances” ACs, assignment to reviewers, and the final accept/reject decision) will be overseen by our counterpart in conjunction with the General Chair, Pierre Isabelle. This way, we still ensure that two people are present at every level of chairing.

Who gets to author a paper? A note on the Vancouver recommendations

At COLING 2018, we require submitted work to follow the Vancouver Convention on authorship – i.e. who gets to be an author on a paper. This guest post by Željko Agić of ITU Copenhagen introduces the topic.

Who gets to author a paper? A note on the Vancouver recommendations

One of the basic principles of publishing scientific research is that research papers are authored and signed by researchers.

Recently, the tenet of authorship has sparked some very interesting discussions in our community. In light of the increased use of preprint servers, we have been questioning the *ACL conference publication workflows. These discussions have mostly had to do with peer review biases, but also with authorship: should we enable blind preprint publications?

The notion of unattributed publications mostly does not sit well with researchers. We do not even know how to cite such papers, while we can invoke entire research programs in our paper narratives through a single last name.

Authorship is of crucial importance in research, and not just in writing up our related work sections. This goes without saying for all of us fellow researchers. While in everyday language an author is simply a writer or an instigator of a piece of work, the question is slightly more nuanced in publishing scientific work:

  • What activities qualify one for paper authorship?
  • If there are multiple contributors, how should they be ordered?
  • Who decides on the list of paper authors?

These questions have sparked many controversies over the centuries of scientific research. One F. D. C. Willard, short for Felis Domesticus Chester, has authored a physics paper, much like Galadriel Mirkwood, a Tolkien-loving Afghan hound versed in medical research. Others have built on the shoulders of giants such as Mickey Mouse and his prolific group.

Yet, authorship is no laughing matter: it can make and break research careers, and its (un)fair treatment can make the difference between a wonderful research group and, at the very least, an uneasy one. A fair and transparent approach to authorship is of particular importance to early-stage researchers. There, the tall tales of PhD students might include the following conjectures:

  • The PIs in medical research just sign all the papers their students author.
  • In algorithms research the author ordering is always alphabetical.
  • Conference papers do not make explicit the individual author contributions.
  • The first and the last author matter the most.

The curiosities and the conjectures listed above all stem from the fact that there seems to be no awareness of any standard rulebook to play by in publishing research. This in turn gives rise to the many different traditions in different fields.

Yet, there is a rulebook!

One prominent attempt to put forth a set of guidelines for determining authorship is the Vancouver Group recommendations. The Vancouver Group is the International Committee of Medical Journal Editors (ICMJE), which in 1985 introduced a set of criteria for authorship. The criteria have seen many updates over the years, to match the latest developments in research and publishing. Their scope far surpasses the topic of authorship, spanning the whole scientific publication process: reviewing, editorial work, publishing, copyright, and the like.

While the recommendations stem from the medical field, they have since been broadened and are now widely adopted. The following is an excerpt from the recommendations relating to the authorship criteria.

The ICMJE recommends that authorship be based on the following 4 criteria:

1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND

2. Drafting the work or revising it critically for important intellectual content; AND

3. Final approval of the version to be published; AND

4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

(…)

All those designated as authors should meet all four criteria for authorship, and all who meet the four criteria should be identified as authors. Those who do not meet all four criteria should be acknowledged.

(…)

These authorship criteria are intended to reserve the status of authorship for those who deserve credit and can take responsibility for the work. The criteria are not intended for use as a means to disqualify colleagues from authorship who otherwise meet authorship criteria by denying them the opportunity to meet criterion #s 2 or 3.

Note that an AND operator ties the four criteria together, but there are some ORs within the individual entries. Thus, in essence, to adhere to the Vancouver recommendations for authorship, one has to meet all four requirements, while each of the four may itself be met minimally.

To take one example:

If you substantially contributed to 1) data analysis, and to 2) revising the paper draft, and then you subsequently 3) approved of the final version and 4) agreed to be held accountable for all the work, then congrats! you have met the authorship criteria!

One could take other routes through the four criteria, some arguably easier, some even harder.

In my own view, we as a field should hope for the Vancouver recommendations to have already been adopted in NLP research, if only implicitly through the way our research groups and collaborations work.

Yet, are they? What are your thoughts? In your view, are the Vancouver recommendations well-matched with the COLING 2018 paper types? In general, are there aspects of your work in NLP that are left uncovered by the authorship criteria? Might there be at least some controversy and discussion potential to this matchup? 🙂

Metadata and COLING submissions

As the deadline for submission draws near, we’d like to alert our authors to a few things that are a bit different from previous COLINGs and other computational linguistics/NLP venues in the hopes that this will help the submission process go smoothly.

Paper types

Please consider the paper type you indicate carefully, as this will affect what the reviewers are instructed to look for in your paper. We encourage you to read the description of the paper types and especially the associated reviewer questions carefully. Which set of questions would you most like to have asked of your paper? (And if reading the questions inspires you to reframe/edit a bit to better address them before submitting, that is absolutely fair game!)

Emiel van Miltenburg raised the point on Twitter last week that it can be difficult to categorize papers and in particular that certain papers might fall between our paper types, combining characteristics of more than one, or being something else entirely.

Emiel and colleagues wondered whether we could implement a “tagging” system where authors could indicate the range of paper types their paper relates to. That is an intriguing idea, but it doesn’t work with the way we are using paper types to improve the diversity and range of papers at COLING. As noted above, the paper types entail different questions on the review forms. We’re doing that because otherwise everything tends to get evaluated against the NLP Engineering Experiment paper type, which in turn means it’s hard to get papers of the other types accepted. And as we hope we’ve made blindingly clear, we really are interested in getting a broad range of paper types!

Keywords

The other aspect of our submission form that will have a strong impact on how your paper is reviewed is the keywords. Following the system pioneered by Ani Nenkova and Owen Rambow as PC co-chairs for NAACL 2016, we have asked all of our reviewers to describe their areas of expertise along five dimensions:

  1. Linguistic targets of study
  2. Application tasks
  3. Approaches
  4. Languages
  5. Genres

(All five of these have a none of the above/not-applicable option.) The reviewers (and area chairs) have all indicated the items on each of these dimensions that they have the expertise and interest to review for. For authors, we ask you to indicate which items on each dimension best describe the paper you are submitting. Softconf will then match your paper to an area, via the assignment of papers to areas that best matches reviewer expertise to the submitted papers.
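
We do not reproduce Softconf’s actual matching procedure here, but a rough mental model is that it scores how well a paper’s keywords overlap with the keywords of the reviewers grouped into each area. The toy sketch below illustrates that kind of overlap scoring; the dimension names, keywords, and areas are invented for the example, and this is emphatically not the real system.

    # Toy illustration only -- not Softconf's actual algorithm. Dimension names,
    # keywords, and areas below are made up for the example.
    DIMENSIONS = ["linguistic_targets", "tasks", "approaches", "languages", "genres"]

    def overlap_score(paper, area):
        """Count keywords shared between a paper and an area, across the five dimensions."""
        return sum(
            len(set(paper.get(dim, [])) & set(area.get(dim, [])))
            for dim in DIMENSIONS
        )

    def best_area(paper, areas):
        """Pick the area whose aggregated reviewer keywords best overlap the paper's."""
        return max(areas, key=lambda name: overlap_score(paper, areas[name]))

    paper = {"tasks": ["machine translation"], "languages": ["German"], "approaches": ["neural"]}
    areas = {
        "MT": {"tasks": ["machine translation"], "languages": ["German", "Czech"],
               "approaches": ["neural", "statistical"]},
        "Parsing": {"tasks": ["dependency parsing"], "languages": ["English"],
                    "approaches": ["neural"]},
    }
    print(best_area(paper, areas))  # -> MT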

In sum: To ensure the most informed reviewing possible of your paper, please fill out these keywords carefully.  We urge you to start your submission in the system ahead of time so you aren’t trying to complete this task in a hurry just at the deadline.

Dual submission policy

Our Call for Papers indicates the following dual submission policy:

Papers that have been or will be under consideration for other venues at the same time must indicate this at submission time. If a paper is accepted for publication at COLING, it must be immediately withdrawn from other venues. If a paper under review at COLING is accepted elsewhere and authors intend to proceed there, the COLING committee must be notified immediately.

We have added a field in the submission form for you to be able to indicate this information.

LRE Map

COLING 2018 is participating in the LRE Map, as described in this guest post by Nicoletta Calzolari. In the submission form, you are asked to provide information about language resources your research has used—and those it has produced. Do not worry about anonymity on this form: this information is not shared with reviewers.

Reviewer Code of Conduct

We ask the reviewers for COLING 2018 to adhere to the following code of conduct. (This has also been sent to the reviewers via email, but for transparency’s sake we post it here as well.)

Reviewer Code of Conduct

As you prepare your reviews, keep in mind that our goal with the review forms is to help reviewers structure their reviews in such a way that they are helpful for the area chairs in making final acceptance decisions, informative for the authors (so they understand the decisions that were made), and helpful for the authors (as they improve their work either for camera ready, or for submission to a later venue). To that end, we ask you to follow these guidelines as you prepare your reviews:

Be timely: Even if you don’t plan to start your reviews as soon as they are assigned, please do log in to START and see which papers you got. This will allow you to notify us of conflicts of interest in time for us to reassign the paper. Furthermore, please don’t count on finishing your reviews at the last minute. As we all know, things can come up, and that time you were counting on might not be there. As we coordinate the efforts of 1200+ PC members, it is imperative that everyone be timely.

Be constructive: Be sure to state what you find valuable about each paper, even if this is difficult to do. There’s a person on the other end of your review, who has put thought and effort into their paper. Your suggestions for improvement will be better received if the author can also see that you understood what they were trying to do. Normative statements (e.g. “insufficient evaluation”) are much more valuable to both authors and chairs when there are supporting explanations, so include them.

Be thorough: Read both the review forms and your papers carefully and provide detailed comments. We ask for scores on specific dimensions because we want you to consider those dimensions as you evaluate the paper. But ultimately, your comments will be more helpful, both to the ACs and to the authors, than the numerical scores. So please comment on each of the points as well in the text of your review. Note, too, that we have quite different review forms for different paper types, because we believe that different paper types should be evaluated in (somewhat) different ways (e.g. a position paper shouldn’t be criticized for not including an evaluation section). Please look at the review form before reading the paper so you know what you are looking for.

Maintain confidentiality: We have confidence that, as a professional researcher, you already know that this entire process is confidential and how to treat it that way. Do not share the papers you review or discuss their contents with others. Do not appropriate the ideas in the paper.

Author responsibilities and the COLING 2018 desk reject policy

As our field experiences an upswing in participation, we have more submissions to our conferences, and this means we have to be careful to keep the reviewing process as efficient as possible. One tool used by editors and chairs is the “desk reject”. This is a way to filter out papers that clearly shouldn’t get through for whatever reason, without asking area chairs and reviewers to handle them, leaving our volunteers to use their energy on the important process of dealing with your serious work.

A desk reject is an automatic rejection without further review. This saves time, but is also quite a strong reaction to a submission. For that reason, this post clarifies possible reasons for a desk reject and the stages at which this might occur. It is the responsibility of the authors to make sure to avoid these situations.

Reasons for desk rejects:

  • Page length violations. The content limit at COLING is nine pages. (You may include as many pages as needed for references.) Appendices, if part of the main paper, must fit within those nine pages. It’s unfair to judge longer papers against those that have kept to the limit, and so exceeding the page limit means a desk reject. (A rough self-check sketch follows this list.)
  • Template cheating. The LaTeX and Word templates give a level playing field for everyone. Squeezing out whitespace, adjusting margins, and changing the font size all stop that playing field from being even and give an unfair advantage. If you’re not using the official template, you’ve altered that template, or the way a manuscript uses it goes beyond our intent, then the paper may be desk rejected.
  • Missing or poor anonymisation. It’s well-established that non-anonymised papers from “big name” authors and institutions fare better during review. To avoid this effect, and others, COLING is running double-blind; see our post on the nuances of double-blinding. We do not endeavour to be arbiters of what does or does not constitute a “big name”—rather, any paper that is poorly anonymised (or not anonymised at all) will face a desk reject. See below for a few more comments on anonymisation.
  • Inappropriate content. We want to give our reviewers and chairs research papers to review. Content that really does not fit this will be desk rejected.
  • Plagiarism. Work that has already appeared, has already been accepted for publication at another venue, or has significant overlap with other work submitted to COLING will be desk rejected. Several major NLP conferences are actively collaborating on detecting this.
  • Breaking the arXiv embargo. COLING follows the ACL pre-print policy. This means that only papers not published on pre-print services or published on pre-print services more than a month before the deadline (i.e. before February 16, 2018) will be considered. Pre-prints published after this date (non-anonymously) may not be submitted for review at COLING. In conjunction with other NLP conferences this year, we’ll be looking for instances of this and desk rejecting them.
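
As a practical aside for authors: a quick page-count sanity check takes only a few lines of scripting. The sketch below assumes the pypdf package and a file named submission.pdf (both assumptions on our part, not requirements); it counts total pages only and cannot tell content pages from reference pages, so treat it as a first pass rather than a compliance guarantee.

    # Rough self-check sketch (not an official COLING tool): count the pages in a PDF.
    # Assumes the pypdf package; counts total pages only, so it cannot distinguish
    # the nine content pages from the unlimited reference pages.
    from pypdf import PdfReader

    def page_count(path: str) -> int:
        return len(PdfReader(path).pages)

    if __name__ == "__main__":
        n = page_count("submission.pdf")
        print(f"{n} pages in total (content limit: 9 pages, plus unlimited references)")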

Desk rejects are determined at four separate points. In order:

  1. Automatic rejection by the START submission system, which has a few checks at various levels.
  2. A rejection by the PC co-chairs, before papers are allocated to areas.
  3. After papers are placed in areas, ACs have the opportunity to check for problems. One response is to desk reject.
  4. Finally, during and immediately after allocation of papers to reviewers, an individual reviewer may send a message to invoke desk rejection, which will be queried and checked by at least two people from the ACs or PC co-chairs.

If you are an honest researcher trying to publish your important and exciting work, the above probably does not apply to you. But if it does, please think twice. We would prefer to send out no desk rejects, and we imagine it would be much more pleasant for our authors if none were to receive one. So, now you know what to avoid!

Postscript on anonymisation

Papers must be anonymised. This protects everybody during review. It’s a complex issue to implement, which is why we earlier had a post dedicated to double blindness in peer review. There are strict anonymisation guidelines in the call for papers and the only way to be sure that nobody takes exception during the review process is to follow these guidelines.

We’ve received several questions on what the best practices for anonymisation are. We realize that in long-standing projects, it can be impossible to truly disguise the group that work comes from. Nonetheless, we expect all COLING authors to observe the following forms of anonymisation:

  1. Do NOT include author names/affiliations in the version of the paper submitted for review.  Instead, the author block should say “Anonymous”.
  2. When making reference to your own published work, cite it as if written by someone else: “Following Lee (2007), …” “Using the evaluation metric proposed by Garcia (2016), …”
  3. The only time it’s okay to use “anonymous” in a citation is when you are referring to your own unpublished work: “The details of the construction of the data are described in our companion paper (anonymous, under review).”
  4. Expanded versions of earlier workshop papers should rework the prose sufficiently so as not to turn up as potential plagiarism examples. The final published version of such papers should acknowledge the earlier workshop paper, but that should be suppressed in the version submitted for review.
  5. More generally, the acknowledgments section should be left out of the version submitted for review.
  6. Papers making code available for reproducibility or resources available for community use should host a version of that at a URL that doesn’t reveal the authors’ identity or institution.

We have been asked a few times about whether LRE Map entries can be done without de-anonymising submissions.  The LRE Map data will not be shared with reviewers, so this is not a concern.

Keeping resources anonymised is a little harder. We recommend you keep things like names of people and labs out of your code and files; for example, Java code uploaded that ran within an edu.uchicago.nlp namespace would be problematic. Similarly, if the URL given is within a personal namespace, this breaks double-blindness, and must be avoided. Google Drive, Dropbox and Amazon S3 – as well as many other file-sharing services – offer reasonably anonymous (and often free) file sharing URLs, and we recommend you use those if you can’t upload your data/code/resources into START as supplementary materials.
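
One practical way to catch obvious leaks before uploading supplementary material is to scan your files for your own names, usernames, lab names, and institutional domains. Here is a minimal sketch of such a check; the search terms and directory name are placeholders that you would replace with your own.

    # Minimal sketch: scan a directory of supplementary files for strings that would
    # reveal author identity. The terms below are placeholders; substitute your own
    # names, usernames, lab names, and institutional domains.
    import pathlib

    IDENTIFYING_TERMS = ["jane.doe", "edu.uchicago.nlp", "mylab.example.edu"]

    def find_identity_leaks(root):
        hits = []
        for path in pathlib.Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore").lower()
            except OSError:
                continue  # skip unreadable files
            for term in IDENTIFYING_TERMS:
                if term.lower() in text:
                    hits.append((str(path), term))
        return hits

    for path, term in find_identity_leaks("supplementary/"):
        print(f"{path}: contains '{term}'")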

LRE Map: What? Why? When? Who?

This guest post is by Nicoletta Calzolari.

Undocumented Language Resources (LRs) don’t exist!

The LRE Map of Language Resources (data and tools) (http://lremap.elra.info) is an innovative instrument introduced at LREC 2010 with the aim of monitoring the wealth of data and technologies developed and used in our field. Why “Map”? Because we aimed at representing the relevant features of a large territory, including aspects not represented in the official catalogues of the field’s major players. But we had other purposes too: we wanted to draw attention to the importance of the LRs behind many of our papers, and also to map the “use” of LRs, to understand the purposes for which LRs are developed.

Its collaborative, bottom-up creation was critical: we conceived the Map as a means to influence a “change of culture” in our community, whereby everyone is asked to make a minimal effort to document the LRs they use or create, and thus comes to understand the need for proper documentation. By spreading the LR documentation effort across many people instead of leaving it only in the hands of the distribution centres, we also encourage awareness of the importance of metadata and proper documentation. Documenting a resource is the first step towards making it identifiable, which in turn is the first step towards reproducibility.

We kept the requested information at a simple level, knowing that we had to compromise between the richness of the metadata and the willingness of authors to fill it in.

With all these purposes in mind, we thought we could exploit the great opportunity offered by LREC and the involvement of so many authors from so many countries, working on different modalities and in so many areas of NLP. The Map has since also been used in the framework of other major conferences, in particular COLING, and this provides another opportunity for useful comparisons.

The number of LRs currently described in the Map is 7453 (instances), collected from 17 different conferences. The major conferences for which we have data on a regular basis are LREC and COLING.

With initiatives such as the LRE Map and “Share your LRs” (introduced in 2014), we want to encourage in the field of LT and LRs what is already normal practice in more mature disciplines, i.e. proper documentation and reproducibility. We think that research is also strongly affected by such infrastructural (meta-research) activities, and we therefore continue to promote – also through such initiatives – greater visibility of LRs, easier sharing of LRs, and the reproducibility of research results.

Here is the vision: it must become common practice in our field too that, when you submit a paper either to a conference or a journal, you are offered the opportunity to document and upload the LRs related to your research. This is even more important in a data-intensive discipline like NLP. The small cost that each of us pays to document, share, etc. should be paid back by the benefit we gain from others’ efforts.

What do we ask of colleagues submitting to COLING 2018? Please document all the LRs mentioned in your paper!

SemEval: Striving for Reproducibility in Research – Guest post

Being able to reproduce experiments and results is important to advancing our knowledge, but it’s not something we’ve always been able to do well. In a series of guest posts, we have invited perspectives and advice on reproducibility in NLP.

by Saif M. Mohammad, National Research Council Canada.

A shared task invites participation in a competition where system predictions are examined and ranked by an independent party on a common evaluation framework (common new training and test sets, common evaluation metrics, etc.). The International Workshop on Semantic Evaluation (SemEval) is a popular shared task platform for computational semantic analysis. (See SemEval-2017; participate in SemEval-2018!) Every year, the workshop selects a dozen or so tasks (from a competitive pool of proposals) and co-ordinates their organization: the setting up of task websites, releasing training and test sets, conducting evaluations, and publishing proceedings. It draws hundreds of participants, and publishes over a thousand pages of proceedings. It’s awesome!

Embedded in SemEval, but perhaps less obvious, is a drive for reproducibility in research: obtaining the same results again, using the same method. Why does reproducibility matter? Reproducibility is a foundational tenet of the scientific method. There is no truth other than reproducibility. If repeated data annotations provide wildly diverging labels, then that data is not capturing anything meaningful. If no one else is able to replicate one’s algorithm and results, then that original work is called into question. (See Most Scientists Can’t Replicate Studies by their Peers and also this wonderful article by Ted Pedersen, Empiricism Is Not a Matter of Faith.)

I have been involved with SemEval in many roles: from a follower of the work, to a participant, a task organizer, and co-chair. In this post, I share my thoughts on some of the key ways in which SemEval encourages reproducibility, and how many of these initiatives can easily be carried over to your research (whether or not it is part of a shared task).

SemEval has several core components:

Tasks: SemEval chooses a mix of repeat tasks (tasks that were run in prior years), new-to-SemEval tasks (tasks studied separately by different research groups, but not part of SemEval yet), and some completely new tasks. The completely new tasks are exciting and allow the community to make quick progress. The new-to-SemEval tasks allow for the comparison and use of disparate past work (ideas, algorithms, and linguistic resources) on a common new test set. The repeat tasks allow participants to build on past submissions and help track progress over the years. By drawing the attention of the community to a set of tasks, SemEval has a way of cleaning house. Literature is scoured, dusted, and re-examined to identify what generalizes well: which ideas and resources are truly helpful.

Bragging rights apart, a common motivation to participate in SemEval is to test whether a particular hypothesis is true or not. Irrespective of what rank a system attains, participants are encouraged to report results on multiple baselines, benchmarks, and comparison submissions.

Data and Resources: The common new (previously unseen) test set is a crucial component of SemEval. It minimizes the risk of highly optimistic results from (over)training on a familiar dataset. Participants usually have only two or three weeks from when they get access to the test set to when they have to provide system submissions. Task organizers often provide links to code and other resources that participants can use, including baseline systems and the winning systems from the past years. Participants can thus build on these resources.

SemEval makes a concerted effort to keep the data and the evaluation framework for the shared tasks available through the task websites even after the official competition. Thus, people with new approaches can continue to compare results with that of earlier participants, even years later. The official proceedings record the work done by the task organizers and participants.

Task Websites: For each task, the organizers set up a website providing details of the task definition, data, annotation questionnaires, links to relevant resources, and references. Since 2017, the tasks have been run on shared task platforms such as CodaLab. These platforms include special features such as phases and leaderboards. Phases often correspond to a pre-evaluation period (when systems have access to the training data but not the test data), the official evaluation period (when the test data is released and official system submissions are to be made), and a post-evaluation period. The leaderboard is a convenient way to record system results. Once the organizers set up the task website with the evaluation script, the system automatically generates results for every new submission and posts them on the leaderboard. There is a separate leaderboard for each phase. Thus, even after the official competition has concluded, one can upload submissions, and the auto-computed results are posted on the leaderboard. Anyone interested in a task can view all of the results in one place.
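
For readers who have not set one of these up before: a scoring program of this kind is usually just a short script that reads the reference labels and the submitted predictions, computes the metric, and writes a scores file for the platform to display. The sketch below assumes the common CodaLab convention of ref/ and res/ subdirectories, one label per line, and a simple accuracy metric; a real task would substitute its own file formats and metrics.

    # Sketch of a minimal CodaLab-style scoring program. Assumptions: an input
    # directory with ref/gold.txt and res/predictions.txt (one label per line),
    # and an output directory where scores.txt is written for the leaderboard.
    import sys
    from pathlib import Path

    def read_labels(path):
        return [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]

    def main(input_dir, output_dir):
        gold = read_labels(Path(input_dir) / "ref" / "gold.txt")
        pred = read_labels(Path(input_dir) / "res" / "predictions.txt")
        if len(gold) != len(pred):
            sys.exit("Submission has the wrong number of predictions.")
        accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
        (Path(output_dir) / "scores.txt").write_text(f"accuracy: {accuracy:.4f}\n")

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])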

SemEval also encourages participants to make system submissions freely available and to make system code available where possible.

Proceedings: For each task, the organizers write a task-description paper that describes their task, data, evaluation, results, and a summary of participating systems. Participants write a system-description paper describing their system and submissions. Special emphasis is placed on replicability in the instructions to authors and in the reviewing process. For the task paper: “present all details that will allow someone else to replicate the data creation process and evaluation.” For the system paper: “present all details that will allow someone else to replicate your system.” All papers are accepted except for system papers that fail to provide clear and adequate details of their submission. Thus SemEval is also a great place to record negative results — ideas that seemed promising but did not work out.

Bonus article: Why it’s time to publish research “failures”

All of the above make SemEval a great sandbox for working on compelling tasks, reproducing and refining ideas from prior research, and developing new ones that are accessible to all. Nonetheless, shared tasks can entail certain less-desirable outcomes that are worth noting and avoiding:

  • Focus on rankings: While the drive to have the top-ranked submission can be productive, it is not everything. More important is the analysis to help improve our collective understanding of the task. Thus, irrespective of one’s rank, it is useful to test different hypotheses and report negative results. 
  • Comparing post-competition results with official competition results: A crucial benefit of participating under the rigor of a shared task is that one does not have access to the reference/gold labels of the test data until the competition has concluded. This is a benefit because having open access to the reference labels can lead to unfair and unconscious optimisation on the test set. Every time one sees the result of their system on a test set and tries something different, it is a step towards optimising on the test set. However, once the competition has concluded, the gold labels are released so that the task organizers are not the only gatekeepers for analysis. Thus, even though post-competition work on the task–data combination is very much encouraged, comparisons of those results with the official competition results have to pass a higher bar of examination and skepticism.

There are other pitfalls worth noting too—feel free to share your thoughts in the comments.

“That’s great!” you say, “but we are not always involved in shared tasks…”

How do I encourage reproducibility of *my* research?

Here are some pointers to get started:

  • In your paper, describe all that is needed for someone else to reproduce the work. Make use of provisions for Appendices. Don’t be limited by page lengths. Post details on websites and provide links in your paper.
  • Create a webpage for the research project. Briefly describe the work so that anybody interested can come away understanding what you are working on and why it matters. There is merit in communicating our work to people at large, and not just to our research peers. Also:
    • Post the project papers or provide links to them.
    • Post annotation questionnaires.
    • Post the code on repositories such as GitHub and CodaLab. Provide links.
    • Share evaluation scripts.
    • Provide interactive visualisations to explore the data and system predictions. Highlight interesting findings.
    • Post tables with results of work on a particular task of interest. This is especially handy if you are working on a new task or creating new data for a task. Use tools such as CodaLab to create leaderboards and allow others to upload their system predictions.
    • If you are releasing data or code, briefly describe the resource, and add information on:
      • What can the resource be used for and how?
      • What hypotheses can be tested with this resource?
      • What are the properties of the resource — its strengths, biases, and limitations?
      • How can one build on the resource to create something new?
  • (Feel free to add more suggestions through your comments below.)

Sharing your work often follows months and years of dedicated research. So enjoy it, and don’t forget to let your excitement shine through! 🙂

Many thanks to Svetlana Kiritchenko, Graeme Hirst, Ted Pedersen, Peter Turney, and Tara Small for comments and discussions.

Area Chairs – and Areas

Traditionally, areas are prescribed by program chairs in anticipation of the field’s interests. This can lead to last-minute scrambles as the number of submissions ends up varying widely across areas. To avoid this, we chose to follow the methodology developed by Ani Nenkova and Owen Rambow for NAACL 2016 in sunny San Diego, CA, USA. For COLING 2018 we have not defined areas directly, but rather will let them emerge from the interests of the area chairs, expressed in keywords. These keywords are then also used to allocate reviewers to areas, and later, papers. That’s why at COLING you won’t be directly asked to select an area for your paper at all; this is managed automatically. You will only be asked to select the type of your paper and describe its focus in keywords, to make sure it’s reviewed correctly. If you don’t know what paper types are available, we highly recommend you see the list of paper types and review questions. The keywords are exposed through the submission interface.

Each area has two area chairs, as previous experience has shown that it’s helpful to have a collaborator with whom to discuss decisions and to share the workload, but that larger groups can lead to lack of clarity in who’s doing what work. We created the AC pairings automatically, keeping the following in mind (a toy sketch of one way to frame the pairing problem follows the list):

  • We want to maximize similarity of AC research expertise (as captured by the keywords provided) in each pair, across the global pairing.
  • We want to minimize AC pairs where there is a large timezone difference, to foster quick troubleshooting and discussion (in the end, we had just one pair not in the same global region).
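
For the curious, the sketch below shows one way this kind of pairing can be framed: as a maximum-weight matching problem, where edge weights reward keyword similarity and penalise timezone gaps. This is not the script we actually used; the chairs, keyword sets, UTC offsets, and penalty weight are invented for illustration.

    # Illustrative sketch only (not the actual COLING pairing script): frame AC
    # pairing as maximum-weight matching, rewarding keyword overlap and
    # penalising large timezone differences.
    import networkx as nx

    def keyword_similarity(a, b):
        """Jaccard similarity of two keyword sets."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def pair_chairs(chairs, timezone_weight=0.05):
        """chairs: dict of name -> (keyword set, UTC offset). Returns a set of pairs."""
        graph = nx.Graph()
        names = list(chairs)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                sim = keyword_similarity(chairs[a][0], chairs[b][0])
                tz_gap = abs(chairs[a][1] - chairs[b][1])
                graph.add_edge(a, b, weight=sim - timezone_weight * tz_gap)
        return nx.max_weight_matching(graph, maxcardinality=True)

    chairs = {
        "AC1": ({"parsing", "morphology"}, 1),
        "AC2": ({"parsing", "semantics"}, 2),
        "AC3": ({"machine translation"}, 8),
        "AC4": ({"machine translation", "low-resource"}, 9),
    }
    print(pair_chairs(chairs))  # e.g. {("AC1", "AC2"), ("AC3", "AC4")}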

In addition, seven of the ACs have been designated not to a specific area but as “Special Circumstances” chairs, who can be called on to troubleshoot or advise as necessary.

Our final AC roster is as follows:

  • Afra Alishahi
  • Alexandre Rademaker
  • Alexis Palmer
  • Aline Villavicencio
  • Alvin Grissom II
  • Andrew Caines
  • Ann Clifton
  • Anna Rumshisky
  • Antske Fokkens
  • Arash Eshghi
  • Aurelie Herbelot
  • Avirup Sil
  • Barry Devereux
  • Chaitanya Shivade
  • Dan Garrette
  • Daniel Lassiter
  • David Schlangen
  • Deyi Xiong
  • Eric Nichols
  • Francis Bond
  • Frank Ferraro
  • Georgiana Dinu
  • Gerard de Melo
  • Gina-Anne Levow
  • Harry Bunt
  • Hatem Haddad
  • Isabelle Augenstein
  • Jiajun Zhang
  • Jose Camacho Collados
  • Klinton Bicknell
  • Lilja Øvrelid
  • Maja Popovic
  • Manuel Montes-y-Gómez
  • Marcos Zampieri
  • Marie-Catherine de Marneffe
  • Meliha Yetisgen
  • Michael Tjalve
  • Miguel Ballesteros
  • Mike Tian-Jian Jiang
  • Mohammad Taher Pilehvar
  • Na-Rae Han
  • Naomi Feldman
  • Natalie Schluter
  • Nathan Schneider
  • Nikola Ljubešić
  • Nurit Melnik
  • Qin Lu
  • Roman Klinger
  • Sadid A. Hasan
  • Sanja Štajner
  • Sara Tonelli
  • Sarvnaz Karimi
  • Sujian Li
  • Sunayana Sitaram
  • Tal Linzen
  • Valia Kordoni
  • Vivek Kulkarni
  • Viviane Moreira
  • Wei Xu
  • Wenjie Li
  • Xiang Ren
  • Xiaodan Zhu
  • Yang Feng
  • Yonatan Bisk
  • Yue Zhang
  • Yun-Nung Chen
  • Zachary Chase Lipton
  • Zeljko Agic
  • Zhiyuan Liu

With the following ACs in Special Circumstances, spread across the world’s timezones:

  • Anders Søgaard
  • Andreas Vlachos
  • Asad Sayeed
  • Di Jiang
  • Karin Verspoor
  • Kevin Duh
  • Steven Bethard

We are grateful to these distinguished scholars for the time and effort they are committing to COLING 2018!

Error analysis in research and writing

The COLING 2018 main conference deadline is in about eight weeks — have you integrated error analysis into your workflow yet?

One distinctive feature of our review forms for COLING 2018 is the question we’ve added about error analysis in the form for the NLP Engineering Experiment paper type. Specifically, we will ask reviewers to consider:

  • Error analysis: Does the paper provide a thoughtful error analysis, which looks for linguistic patterns in the types of errors made by the system(s) evaluated and sheds light on either avenues for future work or the source of the strengths/weaknesses of the systems?

Is error analysis required for NLP engineering experiment papers at COLING?

We’ve been asked this, in light of the fact that many NLP engineering experiment papers (by far the most common type of paper published in computational linguistics and NLP conferences of late) do not include an error analysis, and many of those are still influential, important and valuable.

Our response is of necessity somewhat nuanced. In our ideal world, all NLP engineering experiment papers at COLING 2018 would include thoughtful error analyses. We believe that this would amplify the contributions of the research we publish, both in terms of short-term interest and long-term relevance. However, we also recognize that error analysis is not yet as prominent in the field as it could be and, we would argue, should be.

And so, our answer is that error analysis is not a strict requirement. However, we ask our reviewers to look for it, to value it, and to include the value of the error analysis in their overall evaluation of the papers they review. (And conversely, we absolutely do not want to see reviewers complaining that space in the paper is ‘wasted’ on error analysis.)

But why is error analysis so important?

As Antske Fokkens puts it in her excellent guest post on reproducibility:

The outcome becomes much more convincing if the hypothesis correctly predicts which kind of errors the new approach would solve compared to the baseline. For instance, if you predict that reinforcement learning reduces error propagation, investigate the error propagation in the new system compared to the baseline. Even if it is difficult to predict where improvement comes from, a decent error analysis showing which phenomena are treated better than by other systems, which perform as good or bad and which have gotten worse can provide valuable insights into why an approach works or, more importantly, why it does not.

In other words, a good error analysis tells us something about why method X is effective or ineffective for problem Y. This in turn provides a much richer starting point for further research, allowing us to go beyond throwing learning algorithms at the wall of tasks and seeing which stick, while allowing us to also discover which are the harder parts of a problem. And, as Antske also points out, a good error analysis makes it easier to publish papers about negative results. The observation that method X doesn’t work for problem Y is far more interesting if we can learn something about why not!

How do you do error analysis anyway?

Fundamentally, error analysis involves examining the errors made by a system and developing a classification of them. (This is typically best done over dev data, to avoid compromising held-out test sets.) At a superficial level, this can involve breaking things down by input length, token frequency or looking at confusion matrices. But we should not limit ourselves to examining only labels (rather than input linguistic forms) as with confusion matrices, or superficial properties of the linguistic signal. Languages are, after all, complex systems and linguistic forms are structured. So a deeper error analysis involves examining those linguistic forms and looking for patterns. The categories in the error analysis typically aren’t determined ahead of time, but rather emerge from the data. Does your sentiment analysis system get confused by counterfactuals? Does your event detection system miss negation not expressed by a simple form like not? Does your MT system trip up on translating pronouns especially when they are dropped in the source language? Do your morphological analysis system or string-based features meant to capture noisy morphology make assumptions about the form and position of affixes that aren’t equally valid across test languages?
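
To make the mechanical first pass concrete, here is a small sketch that buckets errors on dev data by input length and tallies a label confusion matrix. The field names and toy examples are illustrative assumptions; the real work is the manual categorisation of the errors that surface.

    # Small sketch of the mechanical first pass of an error analysis on dev data:
    # bucket errors by input length and build a label confusion matrix.
    # (Illustrative only; the deeper analysis is the manual categorisation that follows.)
    from collections import Counter, defaultdict

    def error_analysis(examples):
        """examples: iterable of dicts with 'tokens', 'gold', 'pred' keys (illustrative)."""
        errors_by_length = defaultdict(list)
        confusion = Counter()
        for ex in examples:
            confusion[(ex["gold"], ex["pred"])] += 1
            if ex["gold"] != ex["pred"]:
                bucket = (len(ex["tokens"]) // 10) * 10  # 0-9, 10-19, ... tokens
                errors_by_length[bucket].append(ex)
        return errors_by_length, confusion

    examples = [
        {"tokens": "this movie was not bad at all".split(), "gold": "pos", "pred": "neg"},
        {"tokens": "great film".split(), "gold": "pos", "pred": "pos"},
    ]
    by_length, confusion = error_analysis(examples)
    print(dict(confusion))                            # label confusion counts
    print({k: len(v) for k, v in by_length.items()})  # number of errors per length bucket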

As Emily noted in a guest post over on the NAACL PC blog:

Error analysis of this type requires a good deal of linguistic insight, and can be an excellent arena for collaboration with linguists (and far more rewarding to the linguist than doing annotation). Start this process early. The conversations can be tricky, as you try to explain how the system works to a linguist who might not be familiar with the type of algorithms you’re using and the linguist in turn tries to explain the patterns they are seeing in the errors. But they can be rewarding in equal measure as the linguistic insight brought out by the error analysis can inform further system development.

Why COLING?

This brings us to why COLING in particular should be a leader in placing the spotlight on error analysis: As we noted in a previous blog post, COLING has a tradition of being a locus of interdisciplinary communication between (computational) linguistics and NLP as practiced in computer science. Error analysis is a key, under-discussed component of our research process that benefits from such interdisciplinary communication.

Workshop review process for ACL, COLING, EMNLP, and NAACL 2018

This guest post by the workshop chairs describes the process by which workshops were reviewed for COLING and the other major conferences in 2018 and how they were allocated.

For approximately the last 10 years, ACL, COLING, EMNLP, and NAACL have issued a joint call for workshops. While this adds an additional level of effort and coordination for the conference organizers, it lets workshop organizers focus on putting together a strong program and helps to ensure a balanced set of offerings for attendees across the major conferences each year. Workshop proposals are submitted early in the year, and specify which conference(s) they prefer or require. A committee composed of the workshop chairs of each conference then reviews the proposals, decides which to accept, and assigns them to venues. This blog post explains how the process worked in 2018, which largely followed the guidance on the ACL wiki.

We began by gathering the workshop chairs in August 2017. At that time, workshop chairs from ACL (Brendan O’Connor, Eva Maria Vecchi), COLING (Tim Baldwin, Yoav Goldberg, Jing Jiang), and NAACL (Marie Meteer, Jason Williams) had been appointed, but EMNLP (which occurs last of the 4 events in 2018) had not. This group drafted the call for workshops, largely following previous calls.

The call was issued on August 31, 2017, and specified a due date of October 22, 2017. During those months, the workshop chairs from EMNLP were appointed (Marieke van Erp, Vincent Ng) and joined the committee, which now consisted of 9 people. We received a total of 58 workshop proposals.

We went into the review process with the following goals:

  • Ensure a high-quality workshop program across the conferences
  • Ensure that the topics are relevant to the research community
  • Avoid having topically very similar workshops at the same conference
  • For placing workshops in conferences, follow proposer’s preferences wherever possible, diverging only in cases where there existed space limitations and/or substantial topical overlap

In addition to quality and relevance, it is worth noting here that space is an important consideration for workshops. Each conference has a fixed set of meeting rooms available for workshops, and the sizes of those rooms vary widely, with the smallest room holding 44 people and the largest holding 500. We therefore made considerable effort to estimate the expected attendance at workshops (explained more below).

We started by having each proposal reviewed by 2 members of the committee, with most committee members reviewing around 15 proposals. To aid in the review process, we attempted to first categorize the workshop proposals, to help align proposals with areas of expertise on the committee. This categorization proved quite difficult because many proposals intentionally spanned several disciplines, but it did help identify proposals that were similar.

Our review form included the following questions:

  • Relevance: Is the topic of this workshop interesting for the NLP community?
  • Originality: Is the topic of this workshop original? (“no” not necessarily a bad thing)
  • Variety: Does the topic of this workshop add to the diversity of topics discussed in the NLP community? (“no” not necessarily a bad thing)
  • Quality of organizing team: Will the organisers be able to run a successful workshop?
  • Quality of program committee: Have the organisers drawn together a high-quality PC?
  • Quality of invited speakers (if any): Have high-quality, appropriate invited speaker(s) been identified by the organisers?
  • Quality of proposal: Is the topic of the workshop motivated and clearly explained?
  • Coherence: Is the topic of the workshop coherent?
  • Size (smaller size not necessarily a bad thing):
    • Number of previous attendees: Is there an indication of previous numbers of workshop attendees, and if so, what is that number?
    • Number of previous submissions: Is there an indication of previous numbers of submissions, and if so, what is that number?
    • Projected number of attendees: Is there an indication of projected numbers of workshop attendees, and if so, what is that number?
  • Recommendation: Final recommendation
  • Text comments to provide to proposers
  • Text comments for internal committee use

As was done last year, we also surveyed ACL members to seek input on which workshops people were likely to attend. We felt this survey would be useful in two respects. First, it gave us some additional signal on the relative attendance at each workshop (in addition to workshop organizers’ estimates), which helps assign workshops to appropriately sized rooms. Second, it gave us a rough signal about the interest level from the community. We expected that results from this type of survey would almost certainly be biased, and we kept this in mind when interpreting them.

Before considering the bulk of the 58 submissions, we note that there are a handful of large, long-standing workshops which the ACL organization agrees to pre-admit, including *SEM, WMT, CoNLL, and SemEval. These were all placed at their first-choice venue.

We then dug into our main responsibility of making accept/reject and placement decisions for the bulk of proposals. In making these decisions, we took into account proposal preferences, our reviews, available space, and results from the survey. Although we operated as a joint committee, ultimately the workshop chairs for each conference took responsibility for workshops accepted to their conference.

We first examined space. These 4 conferences in 2018 each had between 8 and 14 rooms available over 2 days, with room capacities ranging from 40 to 500 people. The total space available nearly matched the number of proposals. Specifically — had all proposals been accepted — there was enough space for all but 3 proposals to be at their first choice venue, and the remaining 3 at their second choice.

Considering the reviews, the 2 reviews per proposal were very low-variance: about ⅔ of the final recommendations were identical, and the remaining ⅓ differed by 1 point on a 4-point scale. Overall, we were very impressed by the quality of the proposals, which covered a broad range of topics with strong organizing committees, reviewers, and invited speakers. None of the reviewers recommended 1 (clear reject) for any proposal. Further, the survey results for most borderline proposals showed reasonable interest from the community.

We also considered topicality. Here we found that there were 5 pairs of workshops where each requested the same conference as their first choice, and were topically very similar. In four of the pairs, we assigned a workshop to its second choice conference. In the final pair, in light of all the factors listed above, one workshop was rejected.

In summary, of the 58 proposals, 53 workshops were accepted to their first-choice conference; 4 were accepted to their second-choice conference; and 1 was rejected.

For the general chairs of *ACL conferences next year, we would definitely recommend continuing to organize a similarly large number of workshop rooms. For workshop chairs, we stress that reviewing and selecting workshops is qualitatively different than reviewing and selecting papers; for this reason, we recommend reviewing the proposals among the committee rather than recruiting reviewers (as was previously pointed out by the workshop chairs from the previous year). We would also suggest having workshop chairs consider using a structured form for workshop submissions, since a fair amount of manual effort was required to extract structured data from each proposal document.

WORKSHOP CO-CHAIRS

For ACL:
Brendan O’Connor, University of Massachusetts Amherst
Eva Maria Vecchi, University of Cambridge

For COLING:
Tim Baldwin, University of Melbourne
Yoav Goldberg, Bar Ilan University
Jing Jiang, Singapore Management University

For NAACL:
Marie Meteer, Brandeis University
Jason Williams, Microsoft Research

For EMNLP:
Marieke van Erp, KNAW Humanities Cluster
Vincent Ng, University of Texas at Dallas