Open Science

The post that follows is by our guest author Alice Motes, who is a Research Data and Preservation Manager at the University of Surrey, UK.

What's open science?

Great question! Open science refers to a wide range of approaches to doing science, including open access to publications, open data, open software/code, open peer review, and citizen science (among others). It is driven by principles of transparency, accessibility, and collaboration, resulting in a very different model of production and dissemination of science than is currently practiced in most fields. There tends to be a gap between what scientists believe and how they actually behave. For example, most scientists agree that sharing data is important to the progress of science. However, fewer of those same scientists report sharing their data or having easily accessible data (Tenopir et al. 2011).

In many ways, open science is about engaging in full faith with the ideals of the scientific process, which prizes transparency, verification, reproducibility, and building on each other’s work to push the field forward. Open science encourages opening up all parts of the scientific process, but I want to focus on data. (Conveniently, the area I’m most familiar with! Funny that.) Open data is a natural extension of open access to academic publications.

Most scholars have probably benefited from open access to academic journals. (Or hit a paywall to a non-open-access journal article. Did you know publishers have profit margins higher than Apple's?) The strongest argument for open access is that restrictions on who can get these articles slow scientific advancement and breakthroughs by disadvantaging scientists without access. Combine that with the fact that most research is partially or wholly funded by public money, and it's not a stretch to suggest that these outputs should be made available for the benefit of everyone, scientists and citizens alike.

Open data extends this idea into the realm of data, suggesting that sharing data for verification and reuse can catch errors earlier, foster innovative uses of data, and push science forward faster and more transparently, to the benefit of the field. Not to mention the knock-on benefits of those advances to the public and broader society. Some versions of open data advocate for broadening access beyond scientific communities into the public sphere, where data may be examined and reused in potentially entrepreneurial ways to the benefit of society and the economy. You may also see the term open data applied to government agencies at all levels releasing data that they hold, as part of a push for transparency in governance and potential reuse by entrepreneurs, like using Transport for London's API to build travel apps.

What are the potential benefits of open data?

You mean beyond the benefits to your scholarly peers and broader society? Well, there are lots of ways sharing data can be advantageous for you:

  • More citations – there's evidence to suggest that papers with accompanying data get cited more (Piwowar and Vision 2013).
  • More exposure and impact – more people will see your work, which could lead to more collaborations and publications.
  • Innovative reuse – your data may be useful in ways you don't anticipate outside your field, leading to interdisciplinary impact and more data citations.
  • Better reproducibility – the first reuser of your data is actually you! Plus, you help avoid a reproducibility crisis. (Need more reasons? Check out selfish reproducibility.)

Moreover, you'll benefit from access to your peers' shared data as well! Think about all the cool stuff you could do.

Great! I’m on board. How do I do it?

Well, you just need to answer these three questions, really:

1. Can people get your data?

How are people going to find and download your files? Are you going to deposit the data into a repository?

2. Can people understand the data?

OK, so now they've got your data. Have you included enough documentation that they can understand your file organization, code, and supporting documents?
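
If you like to automate things, even a small script can help keep that documentation in step with the data. Here's a minimal sketch in Python (the folder name and column descriptions are invented for illustration) that writes a plain-text README listing every file in a dataset folder plus a short data dictionary, the sort of thing a future user can read without any special software:

    from pathlib import Path

    # Hypothetical dataset folder and column descriptions -- replace with your own.
    DATA_DIR = Path("my_dataset")
    DATA_DICTIONARY = {
        "participant_id": "anonymised participant identifier",
        "condition": "experimental condition (A or B)",
        "response_ms": "response time in milliseconds",
    }

    lines = ["README for my_dataset", "", "Files included:"]
    for path in sorted(DATA_DIR.rglob("*")):
        if path.is_file():
            lines.append(f"  {path.relative_to(DATA_DIR)} ({path.stat().st_size} bytes)")

    lines += ["", "Data dictionary (columns in data.csv):"]
    for column, description in DATA_DICTIONARY.items():
        lines.append(f"  {column}: {description}")

    # Write the README alongside the data so it travels with the files.
    (DATA_DIR / "README.txt").write_text("\n".join(lines), encoding="utf-8")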

3. Can people use the data?

People have a copy of your data and they understand it. Grand! But can they actually use it? Would someone have to buy expensive software to use it? Could you make a version of your data available in an open format? Have you supplied the code necessary to use the data? (Check out the Software Sustainability Institute for tips.)
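
For tabular data, an "open format" can be as simple as saving a plain CSV copy next to whatever proprietary file your tools produce. Here's a minimal sketch using the pandas library (the file names are made up, and reading SPSS files also requires the pyreadstat package):

    import pandas as pd  # pip install pandas openpyxl pyreadstat

    # Read data from proprietary formats (hypothetical file names).
    survey = pd.read_spss("survey_results.sav")    # SPSS
    measurements = pd.read_excel("lab_data.xlsx")  # Excel

    # Write open, software-agnostic copies alongside the originals.
    survey.to_csv("survey_results.csv", index=False)
    measurements.to_csv("lab_data.csv", index=False)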

For more, check out the FAIR principles (Findable, Accessible, Interoperable, and Reusable).

Of course, there are some very good ethical, legal, and commercial reasons why sharing data is not always possible, but I think the goal should be to strive towards the European Commission's ideal of "as open as possible, as closed as necessary". You can imagine different levels of sharing, expanding outward: within your lab, within your department, within your university, within your scholarly community, and publicly. Most funders across North America and Europe see data as a public good, with the greatest benefit coming from sharing it with the widest possible audience, and they encourage publicly sharing data from the projects they fund.

Make an action plan or a data management plan

Here are some things to do to help you get the ball rolling on sharing data:

  • Get started early and stay organized: document your research anticipating a future user. Check out the Center for Open Science's tools and tips.
  • Deposit your data into a repository (e.g. Zenodo, Figshare). Many universities have their own repository. Some repositories integrate with GitHub, Dropbox, etc. to make it even easier! (See the sketch after this list for one way to deposit programmatically.)
  • Get your data a DOI so citations can be tracked (repositories or your university library can do this for you).
  • Consider applying a license to your data. Don’t be too restrictive though! You want people to do cool things with your data.
  • Ask for help: Your university likely has someone who can help with local resources. Probably in the library. Look for “Research Data Management”. You might find someone like me!
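
As promised above, here's a rough idea of what a scripted deposit can look like. The sketch below follows the general shape of Zenodo's REST API as described in its developer documentation (create a draft deposition, upload a file, attach metadata); the token, file name, and metadata are placeholders, and you should check the current API docs before relying on it. It targets the Zenodo sandbox so you can experiment safely:

    import requests  # pip install requests

    ACCESS_TOKEN = "CHANGE_ME"  # personal access token from the Zenodo sandbox
    BASE = "https://sandbox.zenodo.org/api/deposit/depositions"
    params = {"access_token": ACCESS_TOKEN}

    # 1. Create a new, empty draft deposition.
    r = requests.post(BASE, params=params, json={})
    r.raise_for_status()
    deposition = r.json()

    # 2. Upload the data file into the deposition's file bucket.
    bucket_url = deposition["links"]["bucket"]
    with open("my_dataset.zip", "rb") as fp:
        resp = requests.put(f"{bucket_url}/my_dataset.zip", data=fp, params=params)
        resp.raise_for_status()

    # 3. Attach minimal descriptive metadata (placeholder values).
    metadata = {
        "metadata": {
            "title": "Example research dataset",
            "upload_type": "dataset",
            "description": "Data deposited via the Zenodo REST API.",
            "creators": [{"name": "Doe, Jane"}],
        }
    }
    r = requests.put(f"{BASE}/{deposition['id']}", params=params, json=metadata)
    r.raise_for_status()

    # Publishing the draft (a further POST to .../actions/publish) is what
    # mints the DOI -- left out here so you can review the draft first.

The same pattern works against the live site once you're happy with the draft, and publishing is also the point at which the repository mints the DOI mentioned above.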

But I don’t have time to do it!

Aren't you already creating documentation for yourself? You know, in case someone questions your findings after publication, or if reviewer 2 (always reviewer 2 :::shakes fist:::) questions your methods, or in a couple of months when you're trying to figure out why you decided to run one analysis over another. Surely, making it intelligible to other people isn't adding much to your workflow…or your graduate assistant's workflow? If you incorporate these habits early in the process, you'll cut down the time needed to prepare data at the end. Also, if you consider how much time you spend planning, collecting, analyzing, writing, and revising, the time it takes to prepare your data and share it is relatively small in the grand scheme of things. And why wouldn't you want to have another output to share? Matthew Partridge, a researcher from the University of Southampton and cartoonist at Errant Science, has a great comic illustrating this:

Image by Matthew Partridge, Errant Science

In sum, open science and open data offer a model for a more transparent and collaborative kind of scientific inquiry, one that lives up to the best ideals of science as a community effort, all moving towards discovery and innovation. Plus, you get a cool new output to list on your CV and track its impact in the world. Not a bad shake if you ask me.

Speaker profile – Fabiola Henri

We are proud to announce that Dr. Fabiola Henri will give one of COLING 2018’s keynote talks.

Fabiola Henri has been an Assistant Professor at the University of Kentucky since 2014. She received a Ph.D. in Linguistics from the University of Paris Diderot, France, in 2010. She is a creolist who primarily focuses on the structure and complexity of morphology in creole languages from the perspective of recent abstractive models, with insights from both information-theoretic and discriminative learning. Her work examines the emergence of creole morphology as proceeding from a complex interplay of sociohistorical context, natural language change, input from the lexifier, substratic influence, and unguided second language acquisition, among other factors. Her main interests lie within French-based creoles, and more specifically Mauritian, a language which she speaks natively. Her publications and various presentations offer an empirical and explanatory view of morphological change in French-based creoles, with a view of morphological complexity that starkly contrasts with Exceptionalist theories of creolization.

https://linguistics.as.uky.edu/users/fshe223

Reproducibility in NLP – Guest Post

Being able to reproduce experiments and results is important to advancing our knowledge, but it’s not something we’ve always been able to do well. In a series of guest posts, we have invited perspectives and advice on reproducibility in NLP.

by Liling Tan, Research Scientist at Rakuten Institute of Technology / Universität des Saarlandes.

I think there are at least three levels of reproducibility in NLP: (i) rerun, (ii) repurpose, and (iii) reimplementation.

At the rerun level, the aim is to re-run the open source code on the open dataset shared with the publication. It's sort of a sanity check that one would do to understand the practicality of the inputs and the expected outputs. This level of replication is often skipped because (i) the open data, the open source code, or perhaps the documentation is missing, or (ii) we trust the integrity of the researchers and the publication.

The repurpose level often starts out as a low-hanging fruit project. Usually, the goal is to modify the source code slightly to suit other purposes and/or datasets, e.g. if the code was an implementation of SRU to solve an image recognition task, maybe it could work for machine translation. Alternatively, one might also add the results from the previous state-of-the-art (SOTA) as features/inputs to the new approach.

The last level, reimplementation, is usually overlooked or done out of necessity. For example, an older SOTA might have stale code that doesn't compile or run any more, so it's easier to reimplement the older SOTA technique in the framework you've created for the novel approach than to figure out how to make the stale code run. Often, the re-implementation takes quite some time and effort and, in return, produces just one line of numbers in the table of results.

More often, we see publications simply citing the results of the previous studies for SOTA comparisons on the same dataset instead of reimplementing and incorporating the previous methods into the code for the new methods. This is largely because of how we incentivize “newness” over “reproducibility” in research, but this is getting better as we see “reproducibility” as a reviewing criterion.

We seldom question the comparability of results once a publication has exceeded SOTA performance on a common benchmark metric and dataset. Without replication, we often overlook the sensitivity of the data munging that might happen before system output is put through a benchmarking script. For example, the widespread misuse of the infamous multi-bleu.perl evaluation script overlooked the fact that sentences need to be tokenized before computing the n-gram overlaps in BLEU. Even though the script and gold standards were consistent, different systems have tokenized their outputs differently, making the results incomparable, especially when there's no open source code or clear documentation of the system reported in the publication. To resolve the multi-bleu.perl misuse, replicating a previous SOTA system using the same pre-/post-processing steps would give a fairer account of the comparability between the previous SOTA and the current approach.
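
As an illustration of the tokenization point, one way to sidestep the multi-bleu.perl pitfall is to score detokenized output with a tool that applies its own standard tokenization internally, such as sacreBLEU. A minimal sketch (the hypothesis and reference sentences are invented; sacreBLEU is offered here as one option, not as what any particular published system used):

    import sacrebleu  # pip install sacrebleu

    # Detokenized (plain-text) system outputs and references -- invented examples.
    hypotheses = ["The cat sat on the mat.", "It is raining heavily today."]
    references = [["The cat sat on the mat.", "It rains heavily today."]]

    # corpus_bleu applies sacreBLEU's own tokenization internally, so every
    # system is scored under the same preprocessing, regardless of how each
    # system tokenized its own output.
    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU = {bleu.score:.2f}")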

Additionally, "simply citing" often undermines the currency of benchmarking datasets. Like software, datasets are constantly updated and patched; moreover, new datasets that are more relevant to the current day or to the latest shared task are created. Yet we see publications evaluating on dated benchmarks, most probably to draw a comparison with a previous SOTA. Hopefully, with "reproducibility" as a reviewing criterion, authors will pay more attention to the writing of the paper and share resources so that future work can easily replicate their systems on newer datasets.

The core ingredients of replication studies are open data and open source code. But lacking either shouldn't hinder reproducibility. If the approach is well described in the publication, it shouldn't be hard to reproduce the results on an open dataset. Without shared resources, open source code, and/or proper documentation, one may question the true impact of a publication that can't be easily replicated.

Speaker profile – James Pustejovsky

We are proud to announce that Dr. James Pustejovsky will give one of COLING 2018’s keynote talks.

James Pustejovsky is the TJX Feldberg Chair in Computer Science at Brandeis University, where he is also Chair of the Linguistics Program, Chair of the Computational Linguistics MA Program, and Director of the Lab for Linguistics and Computation. He received his B.S. from MIT and his Ph.D. from UMass Amherst. He has worked on computational and lexical semantics for twenty-five years and is chief developer of Generative Lexicon Theory. He has been committed to developing linguistically expressive lexical data resources for the CL and AI community. Since 2002, he has also been involved in the development of standards and annotated corpora for semantic information in language. Pustejovsky is chief architect of TimeML and ISO-TimeML, a recently adopted ISO standard for temporal information in language, as well as ISO-Space, a specification for spatial information in language.

James Pustejovsky has authored and/or edited numerous books, including Generative Lexicon (MIT, 1995), The Problem of Polysemy (CUP, with B. Boguraev, 1996), The Language of Time: A Reader (OUP, with I. Mani and R. Gaizauskas, 2005), Interpreting Motion: Grounded Representations for Spatial Language (OUP, with I. Mani, 2012), and Natural Language Annotation for Machine Learning (O'Reilly, 2012, with A. Stubbs). Recently, he has been developing a modeling framework for representing linguistic expressions, gestures, and interactions as multimodal simulations. This platform, VoxML/VoxSim, enables real-time communication between humans and computers and robots for joint tasks. Recent books include Recent Advances in Generative Lexicon Theory (Springer, 2013), The Handbook of Linguistic Annotation (Springer, 2017, edited with Nancy Ide), and two textbooks, The Lexicon (Cambridge University Press, 2018, with O. Batiukova) and A Guide to Generative Lexicon Theory (Oxford University Press, 2019, with E. Jezek). He is presently finishing a book on temporal information processing for O'Reilly with L. Derczynski and M. Verhagen.