An Xception Residual Recurrent Neural Network for Audio Event Detection and Tagging

Tomas Gajarsky, Hendrik Purwins

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review


Abstract

Audio tagging (AT) refers to automatically identifying whether a particular sound event is contained in a given audio segment. Sound event detection (SED) further requires a system to determine when exactly an audio event occurs within the audio segment. Task 4 of the DCASE 2017 competition required solving both tasks automatically for a set of 17 sounds (horn, siren, car, bicycle, etc.) relevant for smart cars, a subset of the weakly labeled AudioSet dataset. We propose the Xception-Stacked Residual Recurrent Neural Network (XRRNN), based on modifications of the CVSSP system by Xu et al. (2017), which won the challenge for the AT task. The processing stages of the XRRNN consist of 1) an Xception module as front-end, 2) a 1 × 1 convolution, 3) a set of stacked residual recurrent neural networks, and 4) a feed-forward layer with attention. Using log-Mel spectra and MFCCs as input features, as well as a fusion of the posteriors of networks trained on those input features, we obtain the following results through a set of Bonferroni-corrected t-tests using 30 models for each configuration: For AT, XRRNN significantly outperforms the CVSSP system with a 1.3% improvement (p = 0.0323) in F-score (XRRNN-logMel vs CVSSP-fusion). For SED, for all three input feature combinations, XRRNN significantly reduces the error rate by 4.5% on average (average p = 1.06 · 10⁻¹⁰).
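
The abstract lists the four processing stages of the XRRNN. The following is a minimal, illustrative PyTorch sketch of how those stages could be wired together; the layer sizes, the choice of GRUs for the recurrent stage, and the attention pooling used to turn frame-level posteriors into clip-level tags are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of the four XRRNN stages described in the abstract.
# All hyperparameters below are hypothetical, not the authors' values.
import torch
import torch.nn as nn


class SeparableConv2d(nn.Module):
    """Depthwise separable convolution, the building block of an Xception module."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class XRRNNSketch(nn.Module):
    def __init__(self, n_mels=64, hidden=128, n_classes=17):
        super().__init__()
        # 1) Xception-style front-end over the (time, frequency) spectrogram.
        self.frontend = nn.Sequential(
            SeparableConv2d(1, 32), nn.BatchNorm2d(32), nn.ReLU(),
            SeparableConv2d(32, 32), nn.BatchNorm2d(32), nn.ReLU(),
        )
        # 2) 1 x 1 convolution to mix channels before the recurrent stage.
        self.conv1x1 = nn.Conv2d(32, 1, kernel_size=1)
        # 3) Stacked recurrent layers with a residual (skip) connection.
        self.rnn1 = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.rnn2 = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        # 4) Feed-forward layer with attention: per-frame class posteriors
        #    weighted by per-frame attention and summed over time.
        self.classifier = nn.Linear(2 * hidden, n_classes)
        self.attention = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, n_mels) log-Mel spectrogram
        h = self.frontend(x.unsqueeze(1))           # (batch, 32, time, n_mels)
        h = self.conv1x1(h).squeeze(1)              # (batch, time, n_mels)
        r1, _ = self.rnn1(h)                        # (batch, time, 2*hidden)
        r2, _ = self.rnn2(r1)
        r = r1 + r2                                 # residual connection
        frame_probs = torch.sigmoid(self.classifier(r))  # SED output per frame
        att = torch.softmax(self.attention(r), dim=1)    # attention over time
        clip_probs = (frame_probs * att).sum(dim=1)      # AT output per clip
        return clip_probs, frame_probs


# Example usage on random 10-second clips of 240 frames x 64 Mel bands.
model = XRRNNSketch()
clip_probs, frame_probs = model(torch.randn(2, 240, 64))
print(clip_probs.shape, frame_probs.shape)  # (2, 17) and (2, 240, 17)
```

The clip-level output corresponds to the AT task and the frame-level output to the SED task; in this sketch both come from the same attention-weighted classifier head.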
Original language: English
Title of host publication: Proceedings of the 15th Sound and Music Computing Conference (SMC2018)
Publisher: Sound and Music Computing Network
Publication date: 2018
Pages: 210-216
ISBN (Electronic): 978-9963-697-30-4
DOIs: 10.5281/zenodo.1422563
Publication status: Published - 2018
Event: 15th International Sound & Music Computing Conference - Limassol, Cyprus
Duration: 4 Jul 2018 → …

Conference

Conference: 15th International Sound & Music Computing Conference
Country: Cyprus
City: Limassol
Period: 04/07/2018 → …
Series: Proceedings of the Sound and Music Computing Conference
ISSN: 2518-3672

Cite this

Gajarsky, T., & Purwins, H. (2018). An Xception Residual Recurrent Neural Network for Audio Event Detection and Tagging. In Proceedings of the 15th Sound and Music Computing Conference (SMC2018) (pp. 210-216). Sound and Music Computing Network. (Proceedings of the Sound and Music Computing Conference). https://doi.org/10.5281/zenodo.1422563