Reviewing in an interdisciplinary field

The process of peer review, when functioning at its best, ensures that work published in archival venues is carefully vetted, such that the results are likely to be reliable and the presentation of those results is interpretable by scholars in the field at large. But what does “peer review” mean for an interdisciplinary field? Who are the relevant “peers”? We believe that for both goals—vetting the reliability of results and vetting the readability of their presentation—reviewing in an interdisciplinary field ideally involves reviewers coming from different perspectives.

In our particular context, we had the added (near-)novelty of our keyword-based area assignment system. (Near-novelty, because this was pioneered by NAACL 2016.) This means our areas are not named, but rather emerge from the clusters of both reviewer interests and paper topics. On the upside, this means that really popular topics (“semantics”, or “MT”) can be spread across areas, such that we don’t have area chairs scrambling for large numbers of additional reviewers. On the downside, the areas can’t be named until the dust has settled, so reviewers don’t necessarily have a clear sense of which area (in the traditional sense) they have been assigned to. In addition, interests that were very popular and therefore not highly discriminative (e.g. “MT”) weren’t given much weight on their own by the clustering algorithm.
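
The post doesn’t spell out the algorithm behind this, but the behaviour described—very common keywords like “MT” carrying little weight on their own—is what you would get from IDF-style weighting before clustering. Here is a minimal sketch of that idea, assuming TF-IDF weighting followed by k-means; the keyword lists, `n_areas`, and all data are illustrative, not COLING’s actual pipeline:

```python
# Illustrative sketch only: COLING's real assignment algorithm is not
# published here. This shows how IDF-style weighting makes ubiquitous
# keywords (e.g. "MT") contribute little signal on their own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical data: each paper is a space-separated keyword list.
papers = [
    "MT semantics neural-networks",
    "MT low-resource morphology",
    "parsing syntax treebanks",
    "semantics discourse pragmatics",
]

# TF-IDF down-weights keywords that appear in most documents, so rarer,
# more discriminative keywords dominate the clustering.
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(papers)

# Areas emerge as clusters rather than being named in advance.
n_areas = 2  # illustrative; a real conference has many more
areas = KMeans(n_clusters=n_areas, n_init=10, random_state=0).fit_predict(X)
print(list(zip(papers, areas)))
```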

During the bidding process, we had a handful of requests from reviewers to change areas (really a small number, considering the overall size of the reviewing pool!). These requests came in three types. First, there were a few who just said: There’s relatively little in this area that I feel qualified to review; could I try a different area? In all cases, we were able to find another area that was a better match.

Second, there were reviewers who said “My research interest is X, and I don’t see any papers on X in my area. I only want to review papers on X.” We found these requests a little surprising, as our understanding of the field is that it is not a collection of independent areas that don’t inform each other, but rather a collection of ways of looking at the same very general and very pervasive phenomenon: human language and the ways in which it can be processed by computers. Indeed, we structured the keywords into multiple groups—targets, tasks, approaches, languages and genres—to increase the chance of overlap with at least a few facets of any given reviewer’s expertise. We very much hope that the majority of researchers in our field read outside their specific subfield and are open to influence from other subfields on their own work.
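
To make the faceting idea concrete, here is a small sketch of how grouping keywords by facet gives a reviewer several independent chances to intersect with any given paper. The facet names follow the post; the overlap counting is a hypothetical illustration, not COLING’s actual matching method:

```python
# Illustrative sketch: keywords grouped by facet give a reviewer multiple
# chances to overlap with a paper, even when no single facet matches fully.
FACETS = ("targets", "tasks", "approaches", "languages", "genres")

def facet_overlap(reviewer, paper):
    """Count, per facet, how many keywords reviewer and paper share."""
    return {
        facet: len(set(reviewer.get(facet, ())) & set(paper.get(facet, ())))
        for facet in FACETS
    }

reviewer = {"tasks": {"MT", "parsing"}, "languages": {"German"}}
paper = {"tasks": {"MT"}, "approaches": {"neural"}, "languages": {"German"}}
print(facet_overlap(reviewer, paper))
# {'targets': 0, 'tasks': 1, 'approaches': 0, 'languages': 1, 'genres': 0}
```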

The third type came from reviewers, typically of a more linguistic than computational orientation, who expressed concern that, because they weren’t familiar with the details of the models being used, they wouldn’t be able to review effectively. To these reviewers, we pointed out that it is equally important to look critically at the evaluation (what data is being used, and how) and at the relationship of the work to the linguistic concepts it draws on. Having reviewers with deep linguistic expertise is critical, and both COLING and the authors benefit greatly from it.

To produce the best cross-field review, then, it helps to take stock of one’s strengths and compare them with the multiple facets presented by any paper. No single reviewer is likely to be expert in every area and aspect of a manuscript, but there’s a good chance that, as long as some care has been applied to matching, there will be some crossover expertise. Be bold with that expertise. Indeed, the knowledge that multiple reviewers bring is often complementary. As a reviewer, you can contribute more to at least one aspect of a paper than others sharing the workload, even if that is not the aspect you initially expected.
