Abstract
Evaluation initiatives have been widely credited with contributing significantly to the development and advancement of information access systems, by providing a sustainable platform for conducting the very demanding activity of comparable experimental evaluation at a large scale. Measuring the impact of such benchmarking activities is crucial for assessing which of their aspects have been successful, which activities should be continued, reinforced, or suspended, and which research paths should be pursued in the future. This work introduces a framework for modeling the data produced by evaluation campaigns, a methodology for measuring their scholarly impact, and tools exploiting visual analytics to analyze the outcomes.
Original language | English
---|---
Journal | Procedia Computer Science
Volume | 38
Pages (from-to) | 133-137
Number of pages | 5
ISSN | 1877-0509
DOIs |
Publication status | Published - 2014
Event | Italian Research Conference on Digital Libraries, Padua, Italy, 30 Jan 2014 → 31 Jan 2014 (Conference number: 10)
Conference
Conference | Italian Research Conference on Digital Libraries
---|---
Number | 10
Country/Territory | Italy
City | Padua
Period | 30/01/2014 → 31/01/2014
Keywords
- Scholarly Impact
- Experimental Evaluation
- Experimental Data
- Visual Analytics