On the Assessment of Expertise Profiles

Richard Berends, Maarten De Rijke, Krisztian Balog, Toine Bogers, Antal Van den Bosch

Publication: Contribution to journal › Journal article › Research › peer-reviewed

37 Citations (Scopus)
1081 Downloads (Pure)

Abstract

Expertise retrieval has attracted significant interest in the field of information retrieval. Expert finding has been studied extensively, with less attention going to the complementary task of expert profiling, that is, automatically identifying topics about which a person is knowledgeable. We describe a test collection for expert profiling in which expert users have self-selected their knowledge areas. Motivated by the sparseness of this set of knowledge areas, we report on an assessment experiment in which academic experts judge a profile that has been automatically generated by state-of-the-art expert-profiling algorithms; optionally, experts can indicate a level of expertise for relevant areas. Experts may also give feedback on the quality of the system-generated knowledge areas. We report on a content analysis of these comments and gain insights into what aspects of profiles matter to experts. We provide an error analysis of the system-generated profiles, identifying factors that help explain why certain experts may be harder to profile than others. We also analyze the impact on evaluating expert-profiling systems of using self-selected versus judged system-generated knowledge areas as ground truth; they rank systems somewhat differently, but they detect about the same number of pairwise significant differences even though the judged system-generated assessments are sparser.
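
To make the ground-truth comparison in the abstract concrete, the sketch below scores two hypothetical profiling systems against both a self-selected and a judged set of knowledge areas and applies a paired significance test across experts. It is not code from the paper: the toy experts, the topic labels, the choice of average precision as the profile metric, and the paired t-test are all illustrative assumptions.

# Illustrative sketch (not the authors' implementation): evaluate two
# hypothetical expert-profiling systems against two ground-truth variants,
# self-selected knowledge areas vs. judged system-generated areas.
from scipy.stats import ttest_rel

def average_precision(ranked_areas, relevant):
    # Average precision of a ranked list of knowledge areas against a
    # set of relevant areas; returns 0.0 when no areas are relevant.
    if not relevant:
        return 0.0
    hits, precision_sum = 0, 0.0
    for rank, area in enumerate(ranked_areas, start=1):
        if area in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant)

# Toy data: per expert, each system's ranked profile plus both ground truths.
experts = {
    "expert_1": {
        "system_A": ["ir", "nlp", "ml", "databases"],
        "system_B": ["databases", "ir", "ml", "nlp"],
        "self_selected": {"ir", "nlp"},
        "judged": {"ir", "nlp", "ml"},
    },
    "expert_2": {
        "system_A": ["ml", "vision", "ir"],
        "system_B": ["ml", "ir", "vision"],
        "self_selected": {"ml"},
        "judged": {"ml", "ir"},
    },
    "expert_3": {
        "system_A": ["hci", "ir"],
        "system_B": ["ir", "hci"],
        "self_selected": {"hci"},
        "judged": {"hci", "ir"},
    },
}

for ground_truth in ("self_selected", "judged"):
    ap_a = [average_precision(e["system_A"], e[ground_truth]) for e in experts.values()]
    ap_b = [average_precision(e["system_B"], e[ground_truth]) for e in experts.values()]
    statistic, p_value = ttest_rel(ap_a, ap_b)  # paired test over the same experts
    print(f"{ground_truth}: MAP_A={sum(ap_a)/len(ap_a):.3f} "
          f"MAP_B={sum(ap_b)/len(ap_b):.3f} p={p_value:.3f}")

Other rank-based metrics or significance tests could be substituted; the point is only that the same system runs are scored twice, once per ground-truth variant, and the resulting system rankings are then compared.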
Original language: English
Journal: Journal of the American Society for Information Science and Technology
Volume: 64
Issue number: 10
Pages (from-to): 2024-2044
Number of pages: 21
ISSN: 2330-1635
DOI
Status: Published - Oct. 2013
Published externally: Yes
