Error analysis in research and writing

The COLING 2018 main conference deadline is in about eight weeks — have you integrated error analysis into your workflow yet?

One distinctive feature of our review forms for COLING 2018 is the question we’ve added about error analysis in the form for the NLP Engineering Experiment paper type. Specifically, we will ask reviewers to consider:

  • Error analysis: Does the paper provide a thoughtful error analysis, which looks for linguistic patterns in the types of errors made by the system(s) evaluated and sheds light on either avenues for future work or the source of the strengths/weaknesses of the systems?

Is error analysis required for NLP engineering experiment papers at COLING?

We’ve been asked this, given that many NLP engineering experiment papers (by far the most common type of paper published at computational linguistics and NLP conferences of late) do not include an error analysis, and many of those papers are nonetheless influential, important, and valuable.

Our response is of necessity somewhat nuanced. In our ideal world, all NLP engineering experiment papers at COLING 2018 would include thoughtful error analyses. We believe that this would amplify the contributions of the research we publish, both in terms of short-term interest and long-term relevance. However, we also recognize that error analysis is not yet as prominent in the field as it could be, and, we would argue, should be.

And so, our answer is that error analysis is not a strict requirement. However, we ask our reviewers to look for it, to value it, and to factor the quality of the error analysis into their overall evaluation of the papers they review. (And conversely, we absolutely do not want to see reviewers complaining that space in the paper is ‘wasted’ on error analysis.)

But why is error analysis so important?

As Antske Fokkens puts it in her excellent guest post on reproducibility:

The outcome becomes much more convincing if the hypothesis correctly predicts which kind of errors the new approach would solve compared to the baseline. For instance, if you predict that reinforcement learning reduces error propagation, investigate the error propagation in the new system compared to the baseline. Even if it is difficult to predict where improvement comes from, a decent error analysis showing which phenomena are treated better than by other systems, which perform as good or bad and which have gotten worse can provide valuable insights into why an approach works or, more importantly, why it does not.

In other words, a good error analysis tells us something about why method X is effective or ineffective for problem Y. This in turn provides a much richer starting point for further research, allowing us to go beyond throwing learning algorithms at the wall of tasks to see which stick, and to discover which parts of a problem are the hardest. And, as Antske also points out, a good error analysis makes it easier to publish papers about negative results. The observation that method X doesn’t work for problem Y is far more interesting if we can learn something about why not!

How do you do error analysis anyway?

Fundamentally, error analysis involves examining the errors made by a system and developing a classification of them. (This is typically best done on dev data, to avoid compromising held-out test sets.) At a superficial level, this can mean breaking errors down by input length or token frequency, or looking at confusion matrices. But we should not limit ourselves to examining only labels (rather than input linguistic forms), as with confusion matrices, or only superficial properties of the linguistic signal. Languages are, after all, complex systems, and linguistic forms are structured. So a deeper error analysis involves examining those linguistic forms and looking for patterns. The categories in the error analysis typically aren’t determined ahead of time, but rather emerge from the data. Does your sentiment analysis system get confused by counterfactuals? Does your event detection system miss negation that isn’t expressed by a simple form like ‘not’? Does your MT system trip up on translating pronouns, especially when they are dropped in the source language? Does your morphological analysis system, or do string-based features meant to capture noisy morphology, make assumptions about the form and position of affixes that aren’t equally valid across test languages?
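
To make the superficial end of this concrete, here is a minimal sketch in Python of how one might start, using invented toy data rather than output from any real system: tally a confusion matrix and an error rate by input length over dev predictions, then layer hand-assigned linguistic categories on top of the misclassified examples. The deeper, linguistic step cannot be automated; the code only organizes the bookkeeping around it.

    from collections import Counter, defaultdict

    # Hypothetical dev-set predictions for a sentiment system:
    # (input text, gold label, predicted label). All data here is invented
    # for illustration; substitute your own system's dev output.
    dev_examples = [
        ("I would have loved it if it were not so slow .", "neg", "pos"),
        ("not bad at all !", "pos", "neg"),
        ("great battery life .", "pos", "pos"),
        ("the ending never quite failed to bore me .", "neg", "pos"),
    ]

    # Superficial breakdowns first: a confusion matrix and errors by length.
    confusion = Counter()
    by_length = defaultdict(lambda: [0, 0])   # bucket -> [errors, total]
    for text, gold, pred in dev_examples:
        confusion[(gold, pred)] += 1
        bucket = "short (<6 tokens)" if len(text.split()) < 6 else "longer"
        by_length[bucket][1] += 1
        if gold != pred:
            by_length[bucket][0] += 1

    print("confusion matrix:", dict(confusion))
    for bucket, (errors, total) in by_length.items():
        print(f"{bucket}: {errors}/{total} errors")

    # The deeper step is manual: read each misclassified example, let
    # linguistic categories emerge (counterfactuals, negation, ...), and
    # tally them. These labels (keyed by example index) were assigned by
    # hand for the toy data above.
    hand_labels = {0: "counterfactual", 1: "negation", 3: "negation"}
    print("hand-assigned error categories:", dict(Counter(hand_labels.values())))

Breakdowns like these are only the scaffolding; the real payoff comes from the patterns a linguist (or linguistically informed NLP researcher) can see in the examples themselves.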

As Emily noted in a guest post over on the NAACL PC blog:

Error analysis of this type requires a good deal of linguistic insight, and can be an excellent arena for collaboration with linguists (and far more rewarding to the linguist than doing annotation). Start this process early. The conversations can be tricky, as you try to explain how the system works to a linguist who might not be familiar with the type of algorithms you’re using and the linguist in turn tries to explain the patterns they are seeing in the errors. But they can be rewarding in equal measure as the linguistic insight brought out by the error analysis can inform further system development.

Why COLING?

This brings us to why COLING in particular should be a leader in placing the spotlight on error analysis: As we noted in a previous blog post, COLING has a tradition of being a locus of interdisciplinary communication between (computational) linguistics and NLP as practiced in computer science. Error analysis is a key, under-discussed component of our research process that benefits from such interdisciplinary communication.

Workshop review process for ACL, COLING, EMNLP, and NAACL 2018

This guest post by the workshop chairs describes the process by which workshops were reviewed for COLING and the other major conferences in 2018 and how they were allocated.

For approximately the last 10 years, ACL, COLING, EMNLP, and NAACL have issued a joint call for workshops. While this adds an additional level of effort and coordination for the conference organizers, it lets workshop organizers focus on putting together a strong program and helps to ensure a balanced set of offerings for attendees across the major conferences each year. Workshop proposals are submitted early in the year, and specify which conference(s) they prefer or require. A committee composed of the workshop chairs of each conference then reviews the proposals, decides which to accept, and assigns each accepted workshop to a venue. This blog post explains how the process worked in 2018; it largely followed the guidance on the ACL wiki.

We began by gathering the workshop chairs in August 2017. At that time, workshop chairs from ACL (Brendan O’Connor, Eva Maria Vecchi), COLING (Tim Baldwin, Yoav Goldberg, Jing Jiang), and NAACL (Marie Meteer, Jason Williams) had been appointed, but the chairs for EMNLP (which occurs last of the 4 events in 2018) had not. This group drafted the call for workshops, largely following previous calls.

The call was issued on August 31, 2017, and specified a due date of October 22, 2017. During those months, the workshop chairs from EMNLP were appointed (Marieke van Erp, Vincent Ng) and joined the committee, which now consisted of 9 people. We received a total of 58 workshop proposals.

We went into the review process with the following goals:

  • Ensure a high-quality workshop program across the conferences
  • Ensure that the topics are relevant to the research community
  • Avoid having topically very similar workshops at the same conference
  • When placing workshops at conferences, follow proposers’ preferences wherever possible, diverging only in cases of space limitations and/or substantial topical overlap

In addition to quality and relevance, it is worth noting here that space is an important consideration for workshops. Each conference has a fixed set of meeting rooms available for workshops, and the sizes of those rooms vary widely, with the smallest room holding 44 people and the largest holding 500. We therefore made a considerable effort to estimate the expected attendance at workshops (explained more below).

We started by having each proposal reviewed by 2 members of the committee, with most committee members reviewing around 15 proposals. To aid in the review process, we first attempted to categorize the workshop proposals, to help align proposals with areas of expertise on the committee. This categorization proved quite difficult because many proposals intentionally spanned several disciplines, but it did help identify proposals that were similar.

Our review form included the following questions:

  • Relevance: Is the topic of this workshop interesting for the NLP community?
  • Originality: Is the topic of this workshop original? (“no” not necessarily a bad thing)
  • Variety: Does the topic of this workshop add to the diversity of topics discussed in the NLP community? (“no” not necessarily a bad thing)
  • Quality of organizing team: Will the organisers be able to run a successful workshop?
  • Quality of program committee: Have the organisers drawn together a high-quality PC?
  • Quality of invited speakers (if any): Have high-quality, appropriate invited speaker(s) been identified by the organisers?
  • Quality of proposal: Is the topic of the workshop motivated and clearly explained?
  • Coherence: Is the topic of the workshop coherent?
  • Size (smaller size not necessarily a bad thing):
    • Number of previous attendees: Is there an indication of previous numbers of workshop attendees, and if so, what is that number?
    • Number of previous submissions: Is there an indication of previous numbers of submissions, and if so, what is that number?
    • Projected number of attendees: Is there an indication of projected numbers of workshop attendees, and if so, what is that number?
  • Recommendation: Final recommendation
  • Text comments to provide to proposers
  • Text comments for internal committee use

As was done last year, we also surveyed ACL members to seek input on which workshops people were likely to attend. We felt this survey would be useful in two respects. First, it gave us some additional signal on the relative attendance at each workshop (in addition to workshop organizers’ estimates), which helps assign workshops to appropriately sized rooms. Second, it gave us a rough signal about the interest level from the community. We recognized that results from this type of survey are almost certainly biased, and kept this in mind when interpreting them.

Before considering the bulk of the 58 submissions, we note that there are a handful of large, long-standing workshops which the ACL organization agrees to pre-admit, including *SEM, WMT, CoNLL, and SemEval. These were all placed at their first-choice venue.

We then dug into our main responsibility of making accept/reject and placement decisions for the bulk of proposals. In making these decisions, we took into account proposal preferences, our reviews, available space, and results from the survey. Although we operated as a joint committee, ultimately the workshop chairs for each conference took responsibility for workshops accepted to their conference.

We first examined space. These 4 conferences in 2018 each had between 8 and 14 rooms available over 2 days, with room capacities ranging from 40 to 500 people. The total space available nearly matched the number of proposals. Specifically — had all proposals been accepted — there was enough space for all but 3 proposals to be at their first choice venue, and the remaining 3 at their second choice.

Considering the reviews, the 2 reviews per proposal showed very low variance: for about ⅔ of the proposals the final recommendations were identical, and for the remaining ⅓ they differed by 1 point on a 4-point scale. Overall, we were very impressed by the quality of the proposals, which covered a broad range of topics with strong organizing committees, reviewers, and invited speakers. None of the reviewers recommended 1 (clear reject) for any proposal. Further, the survey results for most borderline proposals showed reasonable interest from the community.

We also considered topicality. Here we found 5 pairs of workshops that were topically very similar and requested the same conference as their first choice. In 4 of the pairs, we assigned one workshop to its second-choice conference. In the final pair, in light of all the factors listed above, one workshop was rejected.

In summary, of the 58 proposals, 53 workshops were accepted to their first-choice conference; 4 were accepted to their second-choice conference; and 1 was rejected.

For the general chairs of *ACL conferences next year, we would definitely recommend continuing to organize a similarly large number of workshop rooms. For workshop chairs, we stress that reviewing and selecting workshops is qualitatively different from reviewing and selecting papers; for this reason, we recommend reviewing the proposals within the committee rather than recruiting reviewers (a point also made by the previous year’s workshop chairs). We would also suggest that future workshop chairs consider using a structured form for workshop submissions, since a fair amount of manual effort was required to extract structured data from each proposal document.

WORKSHOP CO-CHAIRS

For ACL:
Brendan O’Connor, University of Massachusetts Amherst
Eva Maria Vecchi, University of Cambridge

For COLING:
Tim Baldwin, University of Melbourne
Yoav Goldberg, Bar Ilan University
Jing Jiang, Singapore Management University

For NAACL:
Marie Meteer, Brandeis University
Jason Williams, Microsoft Research

For EMNLP:
Marieke van Erp, KNAW Humanities Cluster
Vincent Ng, University of Texas at Dallas

COLING as a Locus of Interdisciplinary Communication

The nature of the relationship between (computational) linguistics and natural language processing remains a hot topic in the field.  There is at this point a substantial history of workshops focused on how to get the most out of this interaction, including at least:

[There are undoubtedly more!  Please let us know what we’ve missed in the comments and we’ll add them to this list.]

The interaction between the fields also tends to be a hot-button topic on Twitter, leading to very long and sometimes informative discussions, such as the NLP/CL Megathread of April 2017 (as captured by Sebastian Mielke) or the November 2017 discussion on linguistics, NLP, and interdisciplinarity, summarized in blog posts by Emily M. Bender and Ryan Cotterell.

It is very important to us as PC co-chairs of COLING 2018 to continue the COLING tradition of providing a venue that encourages interdisciplinary work. COLING as a venue should host both computationally-aided linguistic analysis and linguistically informed work on natural language processing. Furthermore, it should provide a space for authors of each of these kinds of papers to provide feedback to each other.

Actions we have taken so far to support this vision include recruiting area chairs whose expertise spans the two fields and designing our paper types and associated review forms with both kinds of work in mind.

We’d like to see even more discussion of how interdisciplinarity works/can work in our field. What do you consider to be best practices for carrying out such interdisciplinary work? What role do you see for linguistics in NLP/how do computational methods inform your linguistic research? How do you build and maintain collaborations? When you read (or review) in this field, what kinds of features of a paper stand out for you as particularly good approaches to interdisciplinary work? Finally, how can COLING further support such best practices?

 

Recruiting Area Chairs

An absolutely key ingredient for a successful conference is a stellar team of area chairs (ACs). What do we mean by stellar? We need people who take the task seriously, work hard to ensure fairness, bring their expertise to bear in selecting papers that make valuable contributions and constitute a vibrant program, can be effective leaders and get the reviewers to do their job well, and finally who represent a broad range of diverse interests and perspectives on our field. What a tall order!

On top of that, given the size of conferences in our field presently, we need a large team of such amazing colleagues. How big? We are planning for 2000 submissions (yikes!), which we will allocate evenly across 40 areas, so roughly 50 papers per area. We plan to have area chairs work in pairs, so we need 80 area chairs to cover 40 areas. In addition, we anticipate a range of troubleshooting and consulting beyond what we two as PC co-chairs can handle, and so we also want an additional 10 area chairs who can assist across areas, with START troubleshooting, handling papers with COI issues, and whatever else comes up. That means we’re looking for about 100 people total.

We decided to do the recruiting in two phases. The first phase involved recruiting 50 area chairs directly by invitation. The second phase is an open call for nominations (and self-nominations!) for the remaining 50 area chairs. The purpose of this blog post is to give you an update on how we are doing in terms of various metrics of diversity, and, more importantly, to alert you to the call for area chairs. If you would like to serve as an area chair, or if you know someone you’d like to nominate, please fill out this form.

As we select additional area chairs, we will be looking to round out the range of areas of expertise we have recruited so far (see below); maintain our gender balance; improve our regional diversity; improve the representation of area chairs from non-academic affiliations; and improve racial/ethnic diversity. The stats for our area chairs so far are as follows (based on a self-report survey we sent to the area chairs).

Research Interests

A diverse range of areas was described in response to a free-text question. Those with multiple entries are shown in the chart, and the hapaxes are listed below.

  • Accent Variation
  • Active Learning
  • Argument Mining
  • Aspect
  • Authorship Analysis (Attribution, Profiling, Plagiarism Detection)
  • Automatic Summarization
  • Biomedical/clinical Text Processing
  • BioNLP
  • Clinical NLP
  • Clustering
  • Code-mixing
  • Code-switching
  • Computational Cognitive Modeling
  • Computational Discourse
  • Computational Lexical Semantics
  • Computational Lexicography
  • Computational Morphology
  • Computational Pragmatics
  • Conversational AI
  • Conversation Modeling
  • Corpora Construction
  • Corpus Design And Development
  • Corpus Linguistics
  • Cross-language Speech Recognition
  • Cross-lingual Learning
  • Data Modeling And System Architecture
  • Dialogue Pragmatics
  • Dialogue System
  • Dialogue Systems
  • Discourse Modes
  • Discourse Parsing
  • Document Summarization
  • Emotion Analysis
  • Endangered Language Documentation
  • Evaluation
  • Event And Temporal Processing
  • Experimental Linguistics
  • Eye Movements
  • Fact Checking
  • Grammar Correction
  • Grammar Engineering
  • Grammar Induction
  • Grounded Language Learning
  • Grounded Semantics
  • HPSG
  • Incremental Language Processing
  • Information Retrieval
  • KA
  • Korean NLP
  • Language Acquisition
  • Lexical Resources
  • Linguistic Annotation
  • Linguistic Issues In NLP
  • Linguistic Processing Of Non-canonical Text
  • Low-resource Learning
  • Machine Reading
  • Modality
  • Multilingual Systems
  • Multimodal NLP
  • NER
  • NLG
  • NLP In Health Care & Education
  • NLU
  • Ontologies
  • Ontology Construction
  • Phonology
  • POS Tagging
  • Reading
  • Reasoning
  • Relation Extraction
  • Resources
  • Resources And Evaluation
  • Rhetorical Types
  • Semantic Parsing
  • Semantic Processing
  • Short-answer Scoring
  • Situation Types
  • Social Media
  • Social Media Analysis
  • Social Media Analytics
  • Software And Tools
  • Speech
  • Speech Perception
  • Speech Recognition
  • Speech Synthesis
  • Spoken Language Understanding
  • Stance Detection
  • Structured Prediction
  • Summarization
  • Syntactic And Semantic Parsing
  • Syntax/parsing
  • Tagging
  • Temporal Information Extraction
  • Text Classification
  • Text Mining
  • Text Simplification
  • Text Types
  • Transfer Learning
  • Treebanks
  • Vision And Language
  • Weakly Supervised Learning

Gender

We asked a completely open-ended question here, which was furthermore optional, and then binned the answers into the three categories female, male, and other/question skipped.

Country of affiliation

Another open-ended question, which we again binned by region.  Latin America is the Americas minus the US and Canada.  Australia is counted as Asia.  So far Africa is not represented.

 

Type of affiliation

Our survey anticipated five possible answers here: Academia, Industry – research lab, Industry – other, Government, Other; but only the first two are represented so far.

Race/ethnicity

We are interested in making sure that our senior program committee is diverse in terms of race/ethnicity, but it is very difficult to talk about what this means in an international context, because racial constructs are very much products of the cultures they are a part of. So rather than ask for specific race/ethnicity categories, which we would be unprepared to summarize across cultures, we decided to ask the following pair of questions, both of which were optional (like the question about gender):

As we work to make sure that our senior PC is appropriately diverse, we would like to consider race/ethnicity.  Yet, at the level of an international organization, it is very unclear what categories could possibly be appropriate for such a survey.  Accordingly, we have settled on the distinction minoritized (treated as a minority)/not minoritized (treated as normative/majority).

 

In the context of your country of current affiliation, and with respect to your race/ethnicity, are you: (optional)

  • Minoritized
  • Not minoritized

During your education or career prior to your current affiliation, has there ever been a significant period of time during which you were minoritized with respect to your race/ethnicity? (optional)

  • Yes
  • No

Please join us!

We’re looking for about 50 more ACs!  Please consider nominating yourself and/or other people who you think would do a good job and also help us round out our leadership team along the various dimensions identified above.  Both self- and other-nominations can be made via this form. You can nominate as many people as you like (but only nominate yourself once, please 😉).

 

Writing Mentoring Program

Submit your manuscript for mentoring here:  https://www.softconf.com/coling2018/mentoring/

Among the goals we outlined in our inaugural post was the following:

(1) to create a program of high quality papers which represent diverse approaches to and applications of computational linguistics written and presented by researchers from throughout our international community;

One of our strategies for achieving this goal is to create a writing mentoring program, which takes place before the reviewing stage. This optional program is focused on helping those who perhaps aren’t used to publishing in the field of computational linguistics, are early in their careers, and so on. We see mentoring as a tool that makes COLING accessible to a broader range of high-quality ideas. In other words, this isn’t about pushing borderline papers into acceptance, but rather about alleviating presentational problems with papers that, in their underlying research quality, easily meet the required high standard.

In order for this program to be successful, we need buy-in from prospective mentors. In this blog post, we provide the outlines of the program, in order to let the community (including both prospective mentors and mentees) know what we have in mind and to seek (as usual) your feedback.

We plan to run the mentoring program through the START system, as follows:

  • Anyone wishing to receive mentoring will submit an abstract by 4 weeks before the COLING submission deadline. Authors will be instructed that submitting an abstract at this point represents a commitment to submit a full draft by the mentoring deadline and then to submit to COLING.
  • Requesting mentoring doesn’t guarantee receiving mentoring and receiving mentoring doesn’t guarantee acceptance to the conference program.
  • Any reviewer willing to serve as mentor will bid on those abstracts and indicate how many papers total they are willing to mentor. Mentors will receive guidance from the program committee co-chairs on their duties as mentors, as well as a code of conduct.
  • Area chairs will assign papers to mentors by 3 weeks before the submission deadline, giving priority as follows. (Note that if there are not enough mentors, not every paper requesting mentoring will receive it.)
    1. Authors from non-anglophone institutions
    2. Authors from beyond well-represented institutions
  • Authors wishing to receive mentoring will submit complete drafts via START by 3 weeks before the submission deadline.
  • Mentors will provide feedback within one week, using a ‘mentoring form’ created by the PCs structured to encourage constructive feedback.
  • No mentor will serve as a reviewer for a paper they mentored.
  • Mentor bidding will be anonymous, but actual mentoring will not be (in either direction).
  • Mentors will be recognized in the conference handbook/website, but COLING will not indicate which papers received mentoring (though authors are free to acknowledge mentorship in their acknowledgments section).

As a starting point, here are our initial questions for the mentoring form:

  • What is the main claim or result of this paper?
  • What are the strengths of this paper?
  • What questions do you have as a reader?  What do you wish to know about the research that was carried out that is unclear as yet from the paper?
  • What aspect of the paper do you think the COLING audience will find most interesting?
  • Which paper category/review form do you think is most appropriate for this paper?
  • Taking into consideration the specific questions in that review form, in what ways could the presentation of the research be strengthened?
  • If you find grammatical or stylistic issues in the writing, or if you think improvements are possible in the overall organization and structure, please indicate these. It may be most convenient to do so by marking up a PDF with comments.

Regarding the code of conduct, by signing up to mentor a paper, mentors agree to:

  • Maintain confidentiality: Do not share the paper draft or discuss its contents with others (without express permission from the author).  Do not appropriate the ideas in the paper.
  • Commit to prompt feedback: Read the paper and provide feedback via the form by the deadline specified.
  • Be constructive: Avoid sarcastic or harsh evaluative remarks; phrase feedback in terms of how to improve, rather than what is wrong or bad.

The benefits to authors are clear: they will receive feedback on the presentation of their work which, if heeded, might improve their chances of acceptance as well as enhance the impact of the paper once published. Perhaps the benefits to mentors are more in need of articulation. Here are the benefits we see: Mentors will be recognized through a listing in the conference handbook and website, with outstanding mentors receiving further recognition. In addition, mentoring should be rewarding for the mentors because the exercise of giving constructive feedback on academic writing provides insight into what makes good writing. Finally, the mentoring program will benefit the entire COLING audience through both improved presentation of research results and improved diversity of authors included in the conference.

Our questions for our readership at this point are:

  1. What would make this program more enticing to you as a prospective mentor or author?
  2. As a prospective mentor or author, are there additional things you’d like to see in the mentoring form?
  3. Are there points you think we should add to the code of conduct?

 

Call for input: Paper types and associated review forms

In our opening post, we laid out our goals as PC co-chairs for COLING 2018. In this post, we present our approach to the subgoal (of goal #1) of creating a program with many different types of research contributions. As both authors and reviewers, we have been frustrated by the one-size-fits-all review form typical of conferences in our field. When reviewing, how do we answer the ‘technical correctness’ question about a position paper? Or the ‘impact of resources’ question on a paper that doesn’t present any resources?

We believe that a program that includes a wide variety of paper types (as well as a wide variety of paper topics) will be more valuable both for conference attendees and for the field as a whole. We hypothesize that more tailored review forms will lead to fairer treatment of different types of papers, and that fairer treatment will lead to a more varied program. Of course, if we don’t get many papers outside the traditional type (called “NLP engineering experiment paper” below), having tailored review forms won’t do us much good. Therefore, we aim to get the word out early (via this blog post) so that our audience knows what kinds of papers we’re interested in.

Furthermore, we’re interested in what kinds of papers you’re interested in. Below you will find our initial set of five categories, with drafts of the associated review forms. You’ll see some questions are shared across some or all of the paper types, but we’ve elected to lay them out this way (even though it might feel repetitive) so that you can look at each category, putting yourself in both the position of author and of reviewer, and think about what we might be missing/which questions might be inappropriate. Let us know in the comments!

As you answer, keep in mind that our goal with the review forms is to help reviewers structure their reviews in such a way that they are helpful for the area chairs in making final acceptance decisions, informative for the authors (so they understand the decisions that were made), and helpful for the authors (as they improve their work either for camera ready, or for submission to a later venue).

Computationally-aided linguistic analysis

The focus of this paper type is new linguistic insight.

  • Relevance: Is this paper relevant to COLING?
  • Readability/clarity: From the way the paper is written, can you tell what research question was addressed, what was done and why, and how the results relate to the research question?
  • Originality: How original and innovative is the research described? Originality could be in the linguistic question being addressed, in the methodology applied to the linguistic question, or in the combination of the two.
  • Technical correctness/soundness: Is the research described in the paper technically sound and correct? Can one trust the claims of the paper—are they supported by the analysis or experiments and are the results correctly interpreted?
  • Reproducibility: Is there sufficient detail for someone in the same field to reproduce/replicate the results?
  • Generalizability: Does the paper show how the results generalize, either by deepening our understanding of some linguistic system in general or by demonstrating methodology that can be applied to other problems as well?
  • Meaningful comparison: Does the paper clearly place the described work with respect to existing literature? Is it clear both what is novel in the research presented and how it builds on earlier work?
  • Substance: Does this paper have enough substance for a full-length paper, or would it benefit from further development?
  • Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

NLP engineering experiment paper

This paper type matches the bulk of submissions at recent CL and NLP conferences.

  • Relevance: Is this paper relevant to COLING?
  • Readability/clarity: From the way the paper is written, can you tell what research question was addressed, what was done and why, and how the results relate to the research question?
  • Originality: How original and innovative is the research described? Note that originality could involve a new technique or a new task, or it could lie in the careful analysis of what happens when a known technique is applied to a known task (where the pairing is novel) or in the careful analysis of what happens when a known technique is applied to a known task in a new language.
  • Technical correctness/soundness: Is the research described in the paper technically sound and correct? Can one trust the claims of the paper—are they supported by the analysis or experiments and are the results correctly interpreted?
  • Reproducibility: Is there sufficient detail for someone in the same field to reproduce/replicate the results?
  • Error analysis: Does the paper provide a thoughtful error analysis, which looks for linguistic patterns in the types of errors made by the system(s) evaluated and sheds light on either avenues for future work or the source of the strengths/weaknesses of the systems?
  • Meaningful comparison: Does the paper clearly place the described work with respect to existing literature? Is it clear both what is novel in the research presented and how it builds on earlier work?
  • Substance: Does this paper have enough substance for a full-length paper, or would it benefit from further work?
  • Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

Reproduction paper

The contribution of a reproduction paper lies in analyses of and in insights into existing methods and problems—plus the added certainty that comes with validating previous results.

  • Relevance: Is this paper relevant to COLING?
  • Readability/clarity: Is the paper well-written and well-structured?
  • Analysis: If the paper was able to replicate the results of the earlier work, does it clearly lay out what needed to be filled in in order to do so? If it wasn’t able to replicate the results of earlier work, does it clearly identify what information was missing/the likely causes?
  • Generalizability: Does the paper go beyond replicating the results on the original to explore whether they can be reproduced in another setting? Alternatively, in cases of non-replicability, does the paper discuss the broader implications of that result?
  • Informativeness: To what extent does the analysis reported in the paper deepen our understanding of the methodology used or the problem approached? Will the information in the paper help practitioners with their choice of technique/resource?
  • Meaningful comparison: In addition to identifying the experimental results being replicated, does the paper motivate why these particular results are an important target for reproduction and what the future implications are of their having been reproduced or been found to be non-reproducible?
  • Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

Resource paper

Papers in this track present a new language resource. This could be a corpus, but it could also be an annotation standard, a tool, and so on.

  • Relevance: Is this paper relevant to COLING? Will the resource presented likely be of use to our community?
  • Readability/clarity: From the way the paper is written, can you tell how the resource was produced, how the quality of annotations (if any) was evaluated, and why the resource should be of interest?
  • Originality: Does the resource fill a need in the existing collection of accessible resources? Note that originality could be in the choice of language/language variety or genre, in the design of the annotation scheme, in the scale of the resource, or still other parameters.
  • Resource quality: What kind of quality control was carried out? If appropriate, was inter-annotator agreement measured, and if so, with appropriate metrics? Otherwise, what other evaluation was conducted, and how agreeable were the results?
  • Resource accessibility: Will it be straightforward for researchers to download or otherwise access the resource in order to use it in their own work? To what extent can work based on this resource be shared?
  • Metadata: Do the authors make clear whose language use is captured in the resource and to what populations experimental results based on the resource could be generalized? In the case of annotated resources, are the demographics of the annotators also characterized?
  • Meaningful comparison: Is the new resource situated with respect to existing work in the field, including similar resources it took inspiration from or improves on? Is it clear what is novel about the resource?
  • Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

Position paper

A position paper presents a challenge to conventional thinking or a futuristic new vision. It could open up a new area or novel technology, propose changes in existing research, or give a new set of ground rules.

  • Relevance: Is this paper relevant to COLING?
  • Readability/clarity: Is it clear what the position is that the paper is arguing for? Are the arguments for it laid out in an understandable way?
  • Soundness: Are the arguments presented in the paper relevant and coherent? Is the vision well-defined, with success criteria? (Note: It should be possible to give a high score here even if you don’t agree with the position taken by the authors)
  • Creativity: How novel or bold is the position taken in the paper? Does it represent well-thought through and creative new ground?
  • Scope: How much scope for new research is opened up by this paper? What effect could it have on existing areas and questions?
  • Meaningful comparison: Is the paper well-situated with respect to previous work, both position papers (taking the same or opposing side on the same or similar issues) and relevant theoretical or experimental work?
  • Substance: Does the paper have enough substance for a full-length paper? Is the issue sufficiently important? Are the arguments sufficiently thoughtful and varied?
  • Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

 

So, that is the initial set of submission types. These types of paper aren’t limited to single tracks. That is to say, there won’t be a dedicated position paper track, with its own reviewers and chair. You might find a resource paper in any track, for example, and a multi-lingual embeddings track (if one appears—but that’s for a future post) might contain all five kinds of paper mixed together. This makes it even more important that the right questions are asked for each paper type, to help out hard-working reviewers with the task of judging each kind of paper in an appropriate light.

Our questions for you: Is there a type of paper you’d either like to submit to COLING or would like to see at COLING that you think doesn’t fit any of these five already? Should any of the review questions be dropped or refined for any of the paper types? Are there review questions it would be useful to add? Please let us know in the comments!