A framework for evaluating automatic indexing or classification in the context of retrieval

Koraljka Golub, Dagobert Soergel, George Buchanan, Doug Tudhope, Marianne Lykke, Debra Hiom

Publication: Contribution to journal › Journal article › Research › peer review

31 Citations (Scopus)

Abstract

Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources, and enhancing consistency. While some software vendors and experimental researchers claim that such tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted in laboratory conditions, excluding the complexities of real-life systems and situations. The paper reviews and discusses issues with existing evaluation approaches, such as problems of aboutness and relevance assessments, which imply the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluating indexing and classification, and it encompasses three approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard; evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow; and evaluating indexing quality indirectly through analyzing retrieval performance.
Original language: English
Journal: American Society for Information Science and Technology. Journal
Volume: 67
Issue number: 1
Pages (from-to): 3-16
Number of pages: 14
ISSN: 2330-1635
DOI
Status: Published - 2016
