SVSNet+: Enhancing Speaker Voice Similarity Assessment Models with Representations from Speech Foundation Models

Chun Yin, Tai Shih Chi, Yu Tsao, Hsin Min Wang

Research output: peer-reviewed

Abstract

Representations from pre-trained speech foundation models (SFMs) have shown impressive performance in many downstream tasks. However, the potential benefits of incorporating pre-trained SFM representations into speaker voice similarity assessment have not been thoroughly investigated. In this paper, we propose SVSNet+, a model that integrates pre-trained SFM representations to improve performance in assessing speaker voice similarity. Experimental results on the Voice Conversion Challenge 2018 and 2020 datasets show that SVSNet+ incorporating WavLM representations achieves significant improvements over baseline models. In addition, while fine-tuning WavLM with a small dataset of the downstream task does not improve performance, using the same dataset to learn a weighted-sum representation of WavLM can substantially improve performance. Furthermore, when WavLM is replaced by other SFMs, SVSNet+ still outperforms the baseline models and exhibits strong generalization ability.
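The weighted-sum representation mentioned above refers to a common way of combining the hidden outputs of an SFM's layers: a small set of scalar weights, one per layer, is learned on the downstream task while the SFM itself stays frozen. A minimal NumPy sketch of the combination step is shown below; the function name and shapes are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def weighted_sum(layer_reps, weights):
    """Combine per-layer SFM representations with learned scalar weights.

    layer_reps: array of shape (num_layers, time, dim), the frozen SFM's
                hidden outputs for one utterance.
    weights:    raw scores of shape (num_layers,), softmax-normalized so
                the combination is a convex mixture of layers.
    """
    w = np.exp(weights - weights.max())   # numerically stable softmax
    w = w / w.sum()
    # Contract the layer axis: result has shape (time, dim).
    return np.tensordot(w, layer_reps, axes=1)

# Illustrative example: 3 layers, 4 frames, 2-dim features.
reps = np.ones((3, 4, 2))
out = weighted_sum(reps, np.zeros(3))  # zero scores -> equal layer weights
```

In training, the raw scores would be trainable parameters updated by the downstream loss; with all-zero scores, as in the example, the result is simply the mean over layers.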

Original language: English
Pages (from-to): 1195-1199
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
Publication status: Published - 2024
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sep 2024 - 5 Sep 2024
