A CNN-Based Approach to Identification of Degradations in Speech Signals

Yuki Saishu, Amir Hossein Poorjam*, Mads Græsbøll Christensen


Publication: Contribution to journal › Journal article › Research › peer review



The presence of degradations in speech signals, which causes an acoustic mismatch between training and operating conditions, deteriorates the performance of many speech-based systems. A variety of enhancement techniques have been developed to compensate for this acoustic mismatch in speech-based applications.
Applying these signal enhancement techniques, however, requires prior information about the presence and the type of degradation in the speech signal.
In this paper, we propose a new convolutional neural network (CNN)-based approach to automatically identify the major types of degradations commonly encountered in speech-based applications, namely additive noise, nonlinear distortion, and reverberation. In this approach, a set of parallel CNNs, each detecting a certain degradation type, is applied to the log-mel spectrogram of audio signals.
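The pipeline above has two stages: a log-mel spectrogram front end, and a bank of parallel binary detectors, one per degradation type. The sketch below illustrates this idea with a deliberately simplified, untrained stand-in: a pure-NumPy log-mel computation and a one-layer "CNN branch" (single 2-D convolution, ReLU, global average pooling, sigmoid). All parameter values (sample rate, FFT size, mel-band count, kernel size) are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Simplified log-mel front end: windowed STFT magnitude ->
    triangular mel filter bank -> log. Parameters are illustrative."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))        # (frames, n_fft//2 + 1)

    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):                   # triangular filters
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return np.log(mag @ fbank.T + 1e-8)              # (frames, n_mels)

def tiny_cnn_detector(logmel, kernel, bias):
    """A minimal stand-in for one CNN branch: one 2-D convolution,
    ReLU, global average pooling, and a sigmoid output."""
    kh, kw = kernel.shape
    H, W = logmel.shape
    conv = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(conv.shape[0]):
        for j in range(conv.shape[1]):
            conv[i, j] = np.sum(logmel[i:i + kh, j:j + kw] * kernel)
    pooled = np.maximum(conv, 0).mean() + bias       # ReLU + GAP
    return 1.0 / (1.0 + np.exp(-pooled))             # P(degradation present)

rng = np.random.default_rng(0)
x = rng.standard_normal(16000)                       # 1 s of toy "audio"
feats = log_mel_spectrogram(x)
# Three parallel detectors, one per degradation type (weights untrained here).
scores = {name: tiny_cnn_detector(feats, rng.standard_normal((3, 3)) * 0.01, 0.0)
          for name in ("noise", "distortion", "reverberation")}
```

Because each branch is an independent binary detector, the approach naturally handles co-occurring degradations: several detectors can fire on the same recording, which a single multi-class classifier could not express.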
Experimental results on two different speech types, namely pathological voice and normal running speech, demonstrate the effectiveness of the proposed method in detecting the presence and the type of degradation in speech signals, outperforming the state-of-the-art method.
Using score-weighted class activation mapping (Score-CAM), we provide a visual analysis of how the network makes its decisions when identifying different types of degradation, by highlighting the regions of the log-mel spectrogram that are most influential for the target degradation.
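Score-CAM derives its weights from the model's own scores rather than from gradients: each convolutional activation map is normalised, used to mask the input, and weighted by the score the masked input receives; the ReLU of the weighted sum is the saliency map. The NumPy sketch below shows that weighting scheme on toy data; the activation maps, the `score_fn` stand-in for the trained detector, and the map count are all hypothetical placeholders, not the paper's actual network.

```python
import numpy as np

def score_cam(input_img, activation_maps, score_fn):
    """Minimal Score-CAM sketch: mask the input with each normalised
    activation map, weight maps by the score of the masked input
    (softmax over scores), and return the ReLU'd weighted sum."""
    weights = []
    for A in activation_maps:            # assume maps are already input-sized
        a = (A - A.min()) / (A.max() - A.min() + 1e-8)   # normalise to [0, 1]
        weights.append(score_fn(input_img * a))          # score of masked input
    w = np.exp(weights) / np.sum(np.exp(weights))        # softmax weights
    cam = np.maximum(sum(wk * A for wk, A in zip(w, activation_maps)), 0.0)
    return cam / (cam.max() + 1e-8)      # normalised saliency over the input

# Toy demo: a "detector" that scores energy in the low-frequency half
# of a log-mel patch (hypothetical stand-in for a trained CNN branch).
rng = np.random.default_rng(1)
logmel = rng.standard_normal((40, 60))               # (mel bands, frames)
maps = [rng.standard_normal((40, 60)) for _ in range(4)]
score = lambda x: float(x[:20].mean())
saliency = score_cam(logmel, maps, score)
```

High values in `saliency` mark the time-frequency regions that most raised the detector's score, which is how the paper's visualisations localise the evidence for each degradation type.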
Journal: Eurasip Journal on Audio, Speech, and Music Processing
Issue number: 1
Pages (from-to): 1-10
Number of pages: 10
Status: Published - 5 Feb 2021

