Abstract
In recent years, deep neural networks (DNNs) have become a popular choice for audio content analysis. This may be attributed to various factors, including advancements in training algorithms, computational power, and the potential for DNNs to implicitly learn a set of feature detectors. We have recently re-examined two works \cite{sigtiaimproved}\cite{hamel2010learning} that consider DNNs for the task of music genre recognition (MGR). These papers conclude that frame-level features learned by DNNs offer an improvement over traditional, hand-crafted features such as Mel-frequency cepstrum coefficients (MFCCs). However, these conclusions were drawn from training and testing on the GTZAN dataset, which is now known to contain several flaws, including replicated observations and artists \cite{sturm2012analysis}. We illustrate how accounting for these flaws dramatically changes the results, which leads one to question the degree to which the learned frame-level features are actually useful for MGR. We make available a reproducible software package allowing other researchers to completely duplicate our figures and results.
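The artist-replication flaw mentioned above can be illustrated with an artist-conditional ("artist filter") split: if excerpts by the same artist appear in both the training and test partitions, a classifier can score well by memorising artist-specific recording cues rather than genre. The sketch below, using scikit-learn's `GroupShuffleSplit` on synthetic data, shows one way to enforce such a split; it is an illustrative assumption, not the paper's actual evaluation pipeline, and the artist labels here are fabricated stand-ins.

```python
# Hypothetical sketch of an artist-conditional split, assuming each excerpt
# carries an artist annotation (GTZAN itself does not ship with one).
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)

# Fake dataset: 100 excerpts, 10 artists, standing in for GTZAN's
# replicated-artist structure. 13 features per excerpt (e.g. MFCC means).
X = rng.normal(size=(100, 13))
artists = rng.integers(0, 10, size=100)  # artist id for each excerpt

# GroupShuffleSplit guarantees no artist appears in both partitions, so a
# classifier cannot exploit artist identity to inflate its test accuracy.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=artists))

# Every artist is confined to exactly one partition.
assert set(artists[train_idx]).isdisjoint(artists[test_idx])
```

Comparing accuracy under a random split versus a split like this one is a simple way to expose how much of a reported MGR score rests on artist replication rather than on genre-relevant features.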
Original language | English |
---|---|
Publication date | 2015 |
Status | Published - 2015 |
Event | Digital Music Research Network 9 - Queen Mary University of London, London, United Kingdom; Duration: 16 Dec 2014 → 16 Dec 2014 |
Workshop
Workshop | Digital Music Research Network 9 |
---|---|
Location | Queen Mary University of London |
Country/Territory | United Kingdom |
City | London |
Period | 16/12/2014 → 16/12/2014 |
Fingerprint
Dive into the research topics of 'Are deep neural networks really learning relevant features?'. Together they form a unique fingerprint.
Projects
- 1 Completed
- CoSound
Christensen, M. G., Tan, Z., Jensen, S. H. & Sturm, B. L.
01/01/2012 → 31/12/2015
Projects: Project › Research