On Automatic Music Genre Recognition by Sparse Representation Classification using Auditory Temporal Modulations

Bob L. Sturm, Pardis Noorzad

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review


Abstract

A recent system combining sparse representation classification (SRC)
with a perceptually motivated acoustic feature, auditory temporal modulations (ATM)
\cite{Panagakis2009,Panagakis2009b,Panagakis2010c},
outperforms by a significant margin the state of the art in music genre recognition, e.g., \cite{Bergstra2006}.
With genre so difficult to define,
and seemingly based on factors broader than acoustics alone,
this remarkable result motivates investigation into, among other things,
why it works and what it means for how humans organize music.
In this paper, we review the application of SRC and ATM to recognizing genre,
and attempt to reproduce the results of \cite{Panagakis2009}.
First, we find that classification results
are consistent for features extracted from different analyses.
Second, we find that SRC accuracy improves
when we pose the sparse representation problem
with inequality constraints.
Finally, we find that only when we reduce the number of classes by half
do we see the high accuracies reported in \cite{Panagakis2009}.
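The core of SRC is simple: represent a test feature vector as a sparse linear combination of all training vectors, then assign the class whose training vectors account for most of the reconstruction. The following is a minimal sketch on toy data, not the authors' implementation; it uses greedy orthogonal matching pursuit as a stand-in for the ℓ1-minimization (with equality or inequality constraints) discussed in the paper, and all dimensions, class structure, and the `src_classify` helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dictionary: 20 unit-norm training feature vectors per class, 2 classes.
dim, n_per_class, half = 30, 20, 15
A0 = 0.05 * rng.normal(size=(dim, n_per_class))
A0[:half] += 1.0                       # class-0 atoms live on the first 15 dims
A1 = 0.05 * rng.normal(size=(dim, n_per_class))
A1[half:] += 1.0                       # class-1 atoms live on the last 15 dims
A = np.hstack([A0, A1])
A /= np.linalg.norm(A, axis=0)         # unit-norm dictionary atoms
labels = np.array([0] * n_per_class + [1] * n_per_class)

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: k-sparse approximation of y over A."""
    support, x = [], np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

def src_classify(y, A, labels, k=5):
    """Label y by the class whose atoms give the smallest reconstruction residual."""
    x = omp(A, y, k)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A @ np.where(labels == c, x, 0.0))
                 for c in classes]
    return int(classes[np.argmin(residuals)])

# A test vector shaped like the class-0 atoms should be assigned label 0.
y = np.zeros(dim)
y[:half] = 1.0
y /= np.linalg.norm(y)
print(src_classify(y, A, labels))      # prints 0
```

Replacing `omp` with a solver for the inequality-constrained problem (min ‖x‖₁ s.t. ‖Ax − y‖₂ ≤ ε) changes only the sparse-coding step; the residual-based classification rule stays the same.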
Original language: English
Title of host publication: Proceedings of the 9th International Symposium on Computer Music Modeling and Retrieval
Place of publication: London
Publication date: 2012
Pages: 379-394
Publication status: Published - 2012
Event: Computer Music Modeling and Retrieval - London, United Kingdom
Duration: 19 Jun 2012 - 22 Jun 2012

