1 November 2023
The ability of NLP models to generalize well is one of the main desiderata of current NLP research. However, there is no consensus on what ‘good generalization’ entails or how it should be evaluated. Roughly, generalization is the ability to transfer representations, knowledge and strategies from past experience to new situations. It can, for instance, be taken to mean that a model can apply what it has learned from one data set to a new data set in a robust, reliable and fair way. Yet different researchers use different definitions, and there are no common standards for evaluating generalization. As a consequence, newly proposed NLP models are usually not systematically tested for their ability to generalize.
To help overcome this problem, an international team of researchers, including multiple researchers from the University of Amsterdam’s Institute for Logic, Language and Computation (ILLC), has now published an Analysis in Nature Machine Intelligence. In the paper, they present a taxonomy for characterizing and understanding generalization research in NLP. The publication is the first result of the larger project GenBench, led by UvA-ILLC alumna Dieuwke Hupkes.
Lead author Mario Giulianelli (UvA-ILLC) comments: ‘The taxonomy we propose in our Analysis is based on an extensive literature review. We identified five axes along which generalization studies can differ: their main motivation, the type of generalization they aim to address, the type of data shift they consider, the source of this data shift, and the locus of the shift within the modern NLP modelling pipeline. We then used our taxonomy to classify over 700 experiments, and on the basis of these results we present an in-depth analysis that maps out the current state of generalization research in NLP and make recommendations for which areas deserve attention in the future.’
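To make the five axes more concrete, below is a minimal illustrative sketch in Python of how a single experiment might be annotated along them. The class name, field names and example values are chosen purely for illustration and are an assumption of this sketch, not the GenBench annotation format or any official API; the actual categories for each axis are defined in the Analysis and on the GenBench website.

```python
from dataclasses import dataclass


@dataclass
class GeneralizationExperiment:
    """Illustrative record annotating one experiment along the five taxonomy axes."""
    motivation: str            # why generalization is tested (e.g. a practical or cognitive motivation)
    generalization_type: str   # what kind of generalization is targeted (e.g. cross-domain)
    shift_type: str            # what kind of data shift is considered (e.g. covariate shift)
    shift_source: str          # how the shift originated (e.g. naturally occurring vs. generated data)
    shift_locus: str           # where in the modelling pipeline the shift occurs (e.g. between train and test)


# Hypothetical example: a study of cross-domain generalization under a
# naturally occurring shift between training and test data.
example = GeneralizationExperiment(
    motivation="practical",
    generalization_type="cross-domain",
    shift_type="covariate shift",
    shift_source="naturally occurring",
    shift_locus="train-test",
)
print(example)
```

Annotating each of the 700+ experiments with a record of this kind is what makes it possible to aggregate them and map out which combinations of motivation, generalization type and shift are well studied and which remain underexplored.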
NLP researchers interested in the topic of generalization can also visit the GenBench website. It offers multiple resources for exploring and better understanding generalization studies, including an evolving survey, visualization tools and, soon, a generalization leaderboard. The first GenBench workshop will take place at the EMNLP 2023 conference, on 6 December.