Emotion-based Music Retrieval on a Well-reduced Audio Feature Space

Maria Magdalena Ruxanda, Bee Yong Chua, Alexandros Nanopoulos, Christian Søndergaard Jensen

Research output: Contribution to journal › Conference article in journal › Research › peer-review

13 Citations (Scopus)

Abstract

Music expresses emotion. A number of features extracted from audio influence the perceived emotional expression of music. These audio features span a high-dimensional space in which music similarity retrieval can be performed effectively with respect to the human perception of music emotion. However, real-time systems that retrieve music from large music databases can achieve an order-of-magnitude performance increase by applying multidimensional indexing over a dimensionally reduced audio feature space. To achieve this performance, this paper conducts extensive studies of a number of dimensionality reduction algorithms, including both classic and novel approaches. The paper clearly identifies which dimensionality reduction techniques, applied to the considered audio feature space, preserve on average the accuracy of emotion-based music retrieval.
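The abstract's core idea — projecting a high-dimensional audio feature space onto far fewer dimensions before indexing — can be sketched with classic PCA via the singular value decomposition. This is a minimal illustration under assumed data (randomly generated "feature vectors" standing in for real audio features; the 60 and 10 dimensions are made up, not taken from the paper):

```python
import numpy as np

# Hypothetical stand-in for an audio feature matrix: 500 tracks, each
# described by 60 extracted audio features (both numbers are assumptions).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 60))

# Classic PCA: centre the data, then project onto the top-k right
# singular vectors. A multidimensional index would then be built over
# the k-dimensional vectors instead of the original 60-dimensional ones.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
X_reduced = Xc @ Vt[:k].T  # each track now represented by k numbers

print(X_reduced.shape)  # (500, 10)
```

Retrieval then runs nearest-neighbour queries in the reduced space; the paper's contribution is evaluating which such reduction techniques keep emotion-based retrieval accuracy intact.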
Original language: English
Journal: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing
Pages (from-to): 181-184
ISSN: 1520-6149
Publication status: Published - 2009
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - Taipei, Taiwan, Province of China
Duration: 19 Apr 2009 - 24 Apr 2009

Conference

Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 19/04/2009 - 24/04/2009
