Real-time source separation has become increasingly important as more applications, such as voice recognition and voice commands, require clean audio input in noisy environments. Recent developments in deep learning allow models to operate directly on the raw audio waveform, making real-time separation achievable. In this paper, we propose a 1-D convolutional U-Net structure that separates waveform input. The structure incorporates recurrent layers to exploit longer temporal dependencies in the audio signal. Our proposed network architecture also benefits from an extra output channel, which estimates the distortion of the other output channels. The proposed methodology is experimentally shown to yield state-of-the-art results using only 0.76 seconds of input audio.
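The architecture described above can be illustrated with a minimal numpy sketch. This is not the paper's actual model: the channel widths, depth, kernel sizes, activation functions, and the plain tanh RNN standing in for the recurrent layers are all illustrative assumptions, since the abstract does not specify them. The sketch only shows the data flow: strided 1-D convolutions downsample the waveform, a recurrent bottleneck captures longer temporal context, the decoder upsamples with skip connections, and the output has one extra channel alongside the separated sources.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, stride=1):
    """Valid 1-D convolution. x: (C_in, T), w: (C_out, C_in, K)."""
    c_out, c_in, k = w.shape
    t_out = (x.shape[1] - k) // stride + 1
    out = np.empty((c_out, t_out))
    for t in range(t_out):
        seg = x[:, t * stride : t * stride + k]        # (C_in, K)
        out[:, t] = np.tensordot(w, seg, axes=([1, 2], [0, 1]))
    return out

def rnn(x, w_in, w_rec):
    """Plain tanh RNN over the time axis (stand-in for the recurrent layers)."""
    h = np.zeros(w_rec.shape[0])
    out = np.empty((w_rec.shape[0], x.shape[1]))
    for t in range(x.shape[1]):
        h = np.tanh(w_in @ x[:, t] + w_rec @ h)
        out[:, t] = h
    return out

# Hypothetical sizes -- the paper's actual widths and depth are not given here.
C, T = 8, 64                                # channel width, input samples
w_e1 = rng.normal(0, 0.1, (C, 1, 2))        # encoder conv, stride 2
w_e2 = rng.normal(0, 0.1, (C, C, 2))        # encoder conv, stride 2
w_in = rng.normal(0, 0.1, (C, C))           # RNN input weights
w_rec = rng.normal(0, 0.1, (C, C))          # RNN recurrent weights
w_d2 = rng.normal(0, 0.1, (C, 2 * C, 1))    # decoder 1x1 conv after skip concat
w_d1 = rng.normal(0, 0.1, (C, C + 1, 1))    # decoder 1x1 conv after skip concat
w_out = rng.normal(0, 0.1, (3, C, 1))       # 2 source channels + 1 distortion channel

x = rng.normal(0, 1, (1, T))                # mono waveform input

# Encoder: strided convs halve the time axis; keep activations for skips.
e1 = np.tanh(conv1d(x, w_e1, stride=2))     # (C, T/2)
e2 = np.tanh(conv1d(e1, w_e2, stride=2))    # (C, T/4)

# Recurrent bottleneck exploits longer temporal dependencies.
b = rnn(e2, w_in, w_rec)                    # (C, T/4)

# Decoder: nearest-neighbour upsampling, skip concatenation, 1x1 conv.
u2 = np.repeat(b, 2, axis=1)                # (C, T/2)
d2 = np.tanh(conv1d(np.concatenate([u2, e1]), w_d2))  # (C, T/2)
u1 = np.repeat(d2, 2, axis=1)               # (C, T)
d1 = np.tanh(conv1d(np.concatenate([u1, x]), w_d1))   # (C, T)

# Output: separated sources plus the extra distortion-estimate channel.
y = conv1d(d1, w_out)                       # (3, T)
print(y.shape)                              # -> (3, 64)
```

With kernel size 2 and stride 2 the encoder halves the time axis exactly, so the skip activations line up with the upsampled decoder features without cropping; any real implementation would also need training, normalization, and a loss on the separated sources, none of which is sketched here.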
|Conference||2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020|
|Period||04/05/2020 → 08/05/2020|
|Sponsor||The Institute of Electrical and Electronics Engineers, Signal Processing Society|
|Name||ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings|