UnImplicit: The First Workshop on
Understanding Implicit and Underspecified Language

Workshop to be held online in conjunction with ACL-IJCNLP 2021

Recent developments in NLP have led to excellent performance on various semantic tasks. However, an important question that remains open is whether such methods are actually capable of modeling how linguistic meaning is shaped and influenced by context, or if they simply learn superficial patterns that reflect only explicitly stated aspects of meaning. An interesting case in point is the interpretation and understanding of implicit or underspecified language.

More concretely, language utterances may contain empty or fuzzy elements, such as: missing units of measurement, as in "she is 30" vs. "it costs 30" (30 what?); bridging references and other missing links, as in "she tried to enter the car, but the door was stuck" (the door of what?); implicit semantic roles, as in "I met her while driving" (who was driving?); and various sorts of gradable phenomena: is a "small elephant" smaller than a "big bee"? Where is the boundary between "orange" and "red"?

Implicit and underspecified phenomena have been studied in linguistics and philosophy for decades (Sag, 1976; Heim, 1982; Ballmer and Pinkal, 1983), but empirical studies in NLP are few and far between. The number of datasets and task proposals is, however, growing (Roesiger et al., 2018; Elazar and Goldberg, 2019; Ebner et al., 2020; McMahan and Stone, 2020), and recent studies have shown the difficulty of annotating and modeling implicit and underspecified phenomena (Shwartz and Dagan, 2016; Scholman and Demberg, 2017; Webber et al., 2019).

The use of implicit and underspecified terms poses serious challenges to standard natural language processing models, and handling them often requires incorporating greater context, using symbolic inference and common-sense reasoning, or, more generally, going beyond strictly lexical and compositional meaning constructs. This challenge spans all phases of an NLP model's life cycle: from collecting and annotating relevant data, through devising computational methods for modeling such phenomena, to designing proper evaluation metrics and assessing model performance.

Furthermore, most existing efforts in NLP are concerned with one particular problem, their benchmarks are narrow in size and scope, and no common platform or standards exist for studying effects on downstream tasks. In our opinion, interpreting implicit and underspecified language is an inherent part of natural language understanding: these elements are essential for human-like interpretation, and modeling them may be critical for downstream applications.

The goal of this workshop is to bring together theoreticians and practitioners from the entire NLP cycle, from annotation and benchmarking to modeling and applications, and to provide an umbrella for the development, discussion and standardization of the study of understanding implicit and underspecified language. We solicit papers on the following, and other, topics:

Martha Palmer

University of Colorado at Boulder
Title: TBA

Chris Potts

Stanford University
Title: TBA

As part of the workshop, we are organizing a shared task on implicit and underspecified language. The focus of this task is on modeling the necessity of clarifications due to aspects of meaning that are implicit or underspecified in context. Specifically, the task setting follows the recent proposal of predicting revision requirements in collaboratively edited instructions (Bhat et al., 2020). The data consists of instances from wikiHowToImprove (Anthonio et al., 2020) in which a revision resolved an implicit or underspecified linguistic element. The following revision types are part of the data:

Final training and development sets are available here:

Access to the test data requires registration as a participant. If you are interested in participating in the shared task, please send an email to Michael Roth.
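To make the shared-task setting concrete, the problem can be framed as binary classification: given a sentence from a how-to instruction, predict whether it requires a revision that resolves an implicit or underspecified element. The toy rule-based baseline below is a hypothetical illustration only; the cue list and example sentences are not drawn from the actual task data.

```python
# Hypothetical sketch of the shared-task framing: predict whether a
# how-to sentence needs a clarifying revision (binary classification).
# The cue list and examples are illustrative, not from wikiHowToImprove.

VAGUE_CUES = {"it", "this", "that", "they", "some", "things", "stuff"}

def needs_revision(sentence: str) -> bool:
    """Toy baseline: flag a sentence that contains a vague referent."""
    tokens = sentence.lower().rstrip(".!?").split()
    return any(token in VAGUE_CUES for token in tokens)

examples = [
    ("Put it in the oven for 20 minutes.", True),    # "it" is underspecified
    ("Preheat the oven to 180 degrees.", False),     # fully explicit
]

for sentence, gold in examples:
    pred = needs_revision(sentence)
    print(f"correct={pred == gold}: {sentence!r} -> predicted {pred}")
```

A real submission would of course replace this heuristic with a trained model over the released training and development sets.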

We invite both long (8 pages) and short (4 pages) papers. These limits refer to content only; any number of additional pages for references is allowed. Papers should follow the ACL-IJCNLP 2021 formatting instructions (see https://2021.aclweb.org/calls/papers/).

Each submission must be anonymized, written in English, and contain a title and abstract. We specifically encourage papers that address the following themes, for a single phenomenon or a set of phenomena:

To encourage discussion and community building, and to bootstrap potential collaborations, we solicit, in addition to shared task papers and regular "archival" track papers, non-archival submissions as well. These can take two forms:

These works will be reviewed for topical fit, and accepted submissions will be presented as posters (in gather.town or a similar interface). Depending on the final workshop program, selected works may be presented in panels. We intend these to be an opportunity for researchers to present and discuss their work with the relevant community.

Please submit your papers at https://www.softconf.com/acl2021/w19_UnImplicit/

Important Dates

All deadlines are 11:59PM UTC-12:00 ("anywhere on Earth").

Program Committee