Wikipedia is extremely useful as a work of reference, but how do you know whether its content is accurate? UvA researcher Xinyi Li has developed a method to automatically assess the quality of Wikipedia pages. He recently presented his model at the European Conference on Information Retrieval in Vienna.
Wikipedia is the world’s largest and most frequently consulted encyclopedia. Anyone can write or revise its articles, which has fuelled the encyclopedia’s rapid growth. The downside, however, is that the quality of the articles cannot be systematically guaranteed: of all Wikipedia articles, only a small fraction are manually evaluated for quality. Determining the quality of all the other pages requires automatic assessment methods.
Current methods for automatically assessing quality are based on the content of an article. Besides content, Li’s method also looks at who contributed to it. Most articles are written by several people, but not everyone contributes to the same extent, and most authors write about only a limited number of topics.
By taking contributors’ expertise and number of contributions into account alongside content, Li’s method is better placed to assess the quality of an article than methods that rely on content alone.
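The core idea of combining content with contributor information can be illustrated with a small sketch. The code below is not Li et al.’s actual model: the edit log, the word-count weighting, and the scoring functions are all illustrative assumptions. It merely shows how a bipartite article-editor network can be built from edit records and how an article score might weight each contribution by its author’s overall activity.

```python
from collections import defaultdict

# Hypothetical edit log: (article, editor, words contributed).
# All names and numbers are made up for illustration.
edits = [
    ("Alan Turing", "alice", 1200),
    ("Alan Turing", "bob", 150),
    ("Enigma machine", "alice", 800),
    ("Enigma machine", "carol", 50),
    ("Pop music", "dave", 300),
]

# Build both sides of the bipartite article-editor network.
article_editors = defaultdict(lambda: defaultdict(int))
editor_articles = defaultdict(lambda: defaultdict(int))
for article, editor, words in edits:
    article_editors[article][editor] += words
    editor_articles[editor][article] += words

def editor_weight(editor):
    """Crude proxy for expertise: total words contributed anywhere."""
    return sum(editor_articles[editor].values())

def article_score(article):
    """Weight each contribution by its author's overall activity."""
    contribs = article_editors[article]
    total = sum(contribs.values())
    return sum(w / total * editor_weight(e) for e, w in contribs.items())
```

Under this toy weighting, an article written mainly by a prolific editor (here, "alice") scores higher than one written by a single low-activity editor, capturing the intuition that who wrote an article carries signal about its quality.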
In theory, Li’s software could be used to issue automated warnings about ‘inferior quality’. At the moment, warnings are placed in articles published on Wikipedia that do not contain references, have little content, or aren’t written in an objective manner.
Li, X., Tang, J., Wang, T., Luo, Z. & de Rijke, M. (2015). ‘Automatically assessing Wikipedia article quality by exploiting article-editor networks.’ ECIR 2015: 37th European Conference on Information Retrieval.