Genre-adaptive Semantic Computing and Audio-based Modelling for Music Mood Annotation

Pasi Saari, György Fazekas, Tuomas Eerola, Mathieu Barthet, Olivier Lartillot, Mark Sandler

Research output: Contribution to journal › Journal article › Research › peer-review

24 Citations (Scopus)

Abstract

This study investigates whether taking genre into account is beneficial for automatic music mood annotation in terms of the core affects valence, arousal, and tension, as well as several other mood scales. Novel techniques employing genre-adaptive semantic computing and audio-based modelling are proposed. A technique called ACTwg employs genre-adaptive semantic computing of mood-related social tags, whereas ACTwg-SLPwg combines semantic computing and audio-based modelling, both in a genre-adaptive manner. The proposed techniques are experimentally evaluated in predicting listener ratings for a set of 600 popular music tracks spanning multiple genres. The results show that ACTwg outperforms a semantic computing technique that does not exploit genre information, and that ACTwg-SLPwg outperforms conventional techniques and other genre-adaptive alternatives. In particular, improvements in prediction rates are obtained for the valence dimension, which is typically the most challenging core affect dimension for audio-based annotation. The specificity of genre categories is not crucial for the performance of ACTwg-SLPwg. The study also presents analytical insights into inferring a concise tag-based genre representation for genre-adaptive music mood analysis.
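The abstract describes the genre-adaptive idea only at a high level. As a loose illustration, and not the paper's ACT or SLP algorithms, the sketch below shows one generic way an audio-based mood regressor can be made genre-adaptive: fit a separate model per genre category with a global fallback. The feature matrix, genre labels, mood ratings, and the choice of scikit-learn's Ridge regressor are all hypothetical stand-ins.

```python
"""Minimal sketch of genre-adaptive audio-based mood prediction.
Not the authors' ACTwg/SLPwg pipeline: data, genres, and the regressor
(Ridge) are assumed purely for illustration."""
import numpy as np
from sklearn.linear_model import Ridge


class GenreAdaptiveMoodRegressor:
    """One regressor per genre, plus a global model for unseen genres."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.global_model = Ridge(alpha=alpha)
        self.genre_models = {}

    def fit(self, features, moods, genres):
        # features: (n_tracks, n_audio_features); moods: (n_tracks,), e.g. valence ratings
        self.global_model.fit(features, moods)
        for genre in set(genres):
            idx = [i for i, g in enumerate(genres) if g == genre]
            model = Ridge(alpha=self.alpha)
            model.fit(features[idx], moods[idx])
            self.genre_models[genre] = model
        return self

    def predict(self, features, genres):
        preds = np.empty(len(genres))
        for i, genre in enumerate(genres):
            # Fall back to the global model if the genre was not seen in training.
            model = self.genre_models.get(genre, self.global_model)
            preds[i] = model.predict(features[i:i + 1])[0]
        return preds


# Toy usage with random data standing in for audio features and valence ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = rng.uniform(-1, 1, size=60)
g = rng.choice(["rock", "jazz", "electronic"], size=60).tolist()
model = GenreAdaptiveMoodRegressor().fit(X, y, g)
print(model.predict(X[:5], g[:5]))
```

In this reading, the per-genre models play the role of the genre-adaptive audio-based stage, while the semantic-computing stage built on mood-related social tags (the ACT part of the paper) is not reproduced here.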
Original language: English
Journal: IEEE Transactions on Affective Computing
Volume: 7
Issue number: 2
Pages (from-to): 122-135
ISSN: 2371-9850
Publication status: Published - 1 Apr 2016

Keywords

  • Music information retrieval
  • mood prediction
  • social tags
  • semantic computing
  • music genre
  • genre-adaptive
