Description
I critically address the "crisis of evaluation" in music information retrieval (MIR), with particular emphasis on music genre recognition, music mood recognition, and autotagging. I demonstrate four things: 1) many published results unknowingly use datasets with faults that render them meaningless; 2) state-of-the-art ("high classification accuracy") systems are fooled by irrelevant factors; 3) most published results are based upon an invalid evaluation design; and 4) much work has unknowingly built, tuned, tested, compared, and advertised "horses" instead of solutions. (The horse Clever Hans provides an apt illustration: it appeared to solve arithmetic problems, but was in fact responding to unintentional cues from its questioner.) I argue these problems occur because: 1) many researchers assume a dataset is good simply because many others use it; 2) many researchers assume evaluation approaches that are standard in machine learning or information retrieval are useful and relevant for MIR; 3) many researchers mistake systematic, rigorous, and standardized evaluation for scientific evaluation; and 4) problems and success criteria remain ill-defined, and thus evaluation poor, because researchers do not define appropriate use cases. I show how this "crisis of evaluation" can be addressed by formalizing evaluation in MIR to make clear its aims, parts, design, execution, interpretation, and assumptions. I also present several alternative evaluation approaches that can separate horses from solutions.

Period | 13 Nov 2013 |
---|---|
Held at | Unknown external organisation |
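The "horse" failure mode described in the abstract can be sketched concretely. The following is a minimal synthetic illustration (not from the talk itself; the data, the loudness confound, and the nearest-centroid classifier are all assumptions chosen for brevity): a classifier scores highly on a dataset where the genre label is encoded only in an irrelevant confound (overall loudness), and its accuracy collapses under a transformation that should not affect a genuine genre recognizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 "genres", 200 excerpts each, 10-dim features.
# The label is deliberately encoded ONLY in a confound:
# a per-genre gain (overall loudness), not in musical content.
n_per, n_dim = 200, 10
gains = [0.5, 1.0, 2.0]  # confound: loudness per genre
X, y = [], []
for label, g in enumerate(gains):
    content = rng.normal(size=(n_per, n_dim))  # "content": pure noise
    X.append(g * np.abs(content))              # loudness leaks into features
    y.append(np.full(n_per, label))
X, y = np.vstack(X), np.concatenate(y)

# Train/test split
idx = rng.permutation(len(y))
tr, te = idx[:450], idx[450:]

# Nearest-centroid classifier
centroids = np.array([X[tr][y[tr] == c].mean(axis=0) for c in range(3)])

def predict(Z):
    d = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

acc = (predict(X[te]) == y[te]).mean()
print(f"accuracy on test set:            {acc:.2f}")  # looks impressive

# Irrelevant transformation: normalize each excerpt's loudness.
# A genuine genre recognizer should be unaffected; a horse collapses.
X_norm = X[te] / X[te].mean(axis=1, keepdims=True)
acc_norm = (predict(X_norm) == y[te]).mean()
print(f"accuracy after loudness removal: {acc_norm:.2f}")  # near chance
```

High classification accuracy on the original test set says nothing by itself: only the perturbation experiment reveals that the system is responding to loudness, not genre. This is the kind of evaluation design the abstract argues for in place of accuracy comparisons alone.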
Related content

Publications

- Two Systems for Automatic Music Genre Recognition: What Are They Really Recognizing?
  Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review
- Formalizing Evaluation in Music Information Retrieval: A Look at the MIREX Automatic Mood Classification Task
  Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review
- A Survey of Evaluation in Music Genre Recognition
  Research output: Contribution to journal › Conference article in Journal › Research › peer-review
- Evaluating music emotion recognition: Lessons from music genre recognition?
  Research output: Contribution to journal › Conference article in Journal › Research › peer-review
- An Analysis of the GTZAN Music Genre Dataset
  Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review
- Classification Accuracy Is Not Enough: On the Evaluation of Music Genre Recognition Systems
  Research output: Contribution to journal › Journal article › Research › peer-review