Abstract
The electrolarynx is a commonly used assistive device that helps patients whose vocal cords have been removed regain the ability to speak. Although the electrolarynx can generate excitation signals like the vocal cords, the naturalness and intelligibility of electrolaryngeal (EL) speech differ greatly from those of natural (NL) speech. Many deep-learning-based models have been applied to electrolaryngeal speech voice conversion (ELVC) to convert EL speech to NL speech. In this study, we propose a multimodal voice conversion (VC) model that integrates acoustic and visual information into a unified network. We compared different pre-trained models as visual feature extractors and evaluated the effectiveness of these features in the ELVC task. The experimental results demonstrate that the proposed multimodal VC model outperforms single-modal models in both objective and subjective metrics, suggesting that the integration of visual information can significantly improve the quality of ELVC.
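The abstract describes integrating acoustic and visual streams into a unified VC network. Below is a minimal, illustrative PyTorch sketch of one common way to do this, frame-level late fusion of the two modalities, assuming time-aligned inputs; it is not the paper's actual architecture, and all module names, dimensions, and design choices here are hypothetical.

```python
import torch
import torch.nn as nn

class MultimodalVC(nn.Module):
    """Hypothetical late-fusion encoder-decoder for ELVC.

    Encodes EL acoustic features and visual (e.g. lip-region) features
    separately, concatenates the two streams per frame, and decodes
    NL acoustic features. All dimensions are illustrative only.
    """

    def __init__(self, acoustic_dim=80, visual_dim=512, hidden=256):
        super().__init__()
        # Acoustic branch: e.g. mel-spectrogram frames of EL speech.
        self.acoustic_enc = nn.GRU(acoustic_dim, hidden, batch_first=True)
        # Visual branch: e.g. embeddings from a pre-trained visual model.
        self.visual_enc = nn.GRU(visual_dim, hidden, batch_first=True)
        # Decoder maps the fused representation to NL acoustic features.
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, acoustic_dim)

    def forward(self, acoustic, visual):
        # Both inputs are assumed time-aligned: (batch, frames, dim).
        a, _ = self.acoustic_enc(acoustic)
        v, _ = self.visual_enc(visual)
        fused = torch.cat([a, v], dim=-1)  # frame-level late fusion
        h, _ = self.decoder(fused)
        return self.proj(h)                # predicted NL features

# Toy usage: 2 utterances, 100 frames each.
model = MultimodalVC()
el_mel = torch.randn(2, 100, 80)
lip_emb = torch.randn(2, 100, 512)
nl_pred = model(el_mel, lip_emb)
print(nl_pred.shape)  # torch.Size([2, 100, 80])
```

Concatenation-based fusion is only one option; attention-based or gated fusion are common alternatives, and the paper's comparison of pre-trained visual extractors would correspond to swapping the source of `lip_emb` in this sketch.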
| Original language | English |
| --- | --- |
| Pages (from–to) | 5023-5026 |
| Number of pages | 4 |
| Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| Volume | 2023-August |
| DOIs | |
| Publication status | Published - 2023 |
| Event | 24th Annual Conference of the International Speech Communication Association, Interspeech 2023 - Dublin, Ireland |
| Duration | 20 Aug 2023 → 24 Aug 2023 |