Abstract
Audio and visual modalities are inherently connected in speech signals: lip movements and facial expressions are correlated with speech sounds. This motivates studies that incorporate the visual modality to enhance an acoustic speech signal or even restore missing audio information. Specifically, this paper focuses on the problem of audio-visual speech inpainting: synthesizing the speech in a corrupted audio segment in a way that is consistent with the corresponding visual content and the uncorrupted audio context. We present an audio-visual transformer-based deep learning model that leverages visual cues carrying information about the content of the corrupted audio. It outperforms the previous state-of-the-art audio-visual model as well as audio-only baselines. We also show that visual features extracted with AV-HuBERT, a large audio-visual transformer for speech recognition, are suitable for synthesizing speech.
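For readers unfamiliar with the task setup, the sketch below illustrates the general idea in PyTorch: a masked segment of a mel-spectrogram is reconstructed by a transformer encoder that attends over the uncorrupted audio context and visual features such as AV-HuBERT embeddings. This is not the paper's implementation; all module names, dimensions (e.g., the 768-dimensional visual features), the additive fusion, and the L1 objective are illustrative assumptions.

```python
# Minimal audio-visual speech inpainting sketch (illustrative, not the
# authors' code): corrupted audio frames are replaced by a learned mask
# embedding, visual features condition the reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVInpainter(nn.Module):
    def __init__(self, n_mels=80, d_visual=768, d_model=256, n_layers=4):
        super().__init__()
        self.audio_proj = nn.Linear(n_mels, d_model)
        # 768 matches the AV-HuBERT Base embedding size (an assumption here).
        self.visual_proj = nn.Linear(d_visual, d_model)
        # Learned embedding that stands in for corrupted audio frames.
        self.mask_embed = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_mels)  # regress mel-spectrogram frames

    def forward(self, mels, visual, corrupt_mask):
        # mels: (B, T, n_mels); visual: (B, T, d_visual), upsampled to the
        # audio frame rate; corrupt_mask: (B, T) bool, True where audio is missing.
        a = self.audio_proj(mels)
        a = torch.where(corrupt_mask.unsqueeze(-1), self.mask_embed.expand_as(a), a)
        x = a + self.visual_proj(visual)  # simple additive fusion of the two streams
        return self.head(self.encoder(x))

model = AVInpainter()
mels = torch.randn(2, 100, 80)     # two 1 s utterances at a 10 ms hop
visual = torch.randn(2, 100, 768)  # stand-in for AV-HuBERT lip features
corrupt = torch.zeros(2, 100, dtype=torch.bool)
corrupt[:, 40:60] = True           # a 200 ms corrupted segment
pred = model(mels, visual, corrupt)
loss = F.l1_loss(pred[corrupt], mels[corrupt])  # reconstruct only masked frames
```

Inside the gap the visual stream is the only signal carrying phonetic content, which is what makes the audio-visual model outperform audio-only baselines; the additive fusion and L1 regression above are chosen purely for brevity, and cross-modal attention or adversarial objectives are common alternatives in the literature.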
Original language | English |
---|---|
Title of host publication | Proc. INTERSPEECH 2023 |
Number of pages | 5 |
Publisher | ISCA |
Publication date | 2023 |
Pages | 4459-4463 |
DOIs | |
Publication status | Published - 2023 |
Event | 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 - Dublin, Ireland |
Duration | 20 Aug 2023 → 24 Aug 2023 |
Conference
Conference | 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 |
---|---|
Country/Territory | Ireland |
City | Dublin |
Period | 20/08/2023 → 24/08/2023 |
Sponsor | Amazon Science, Apple, Dataocean AI, Google Research, Meta AI, et al. |
Series | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
---|---|
ISSN | 2308-457X |
Bibliographical note
Publisher Copyright: © 2023 International Speech Communication Association. All rights reserved.
Keywords
- audio-visual
- deep learning
- inpainting
- multimodal
- speech
- transformer