Contrastive Self-Supervised Speaker Embedding With Sequential Disentanglement

Youzhi Tu, Man Wai Mak*, Jen Tzung Chien

*Corresponding author of this work

Research output: Article › peer-review

Abstract

Contrastive self-supervised learning has been widely used in speaker embedding to address the labeling challenge. Contrastive speaker embedding assumes that the contrast between the positive and negative pairs of speech segments is attributed to speaker identity only. However, this assumption is incorrect because speech signals contain not only speaker identity but also linguistic content. In this paper, we propose a contrastive learning framework with sequential disentanglement to remove linguistic content by incorporating a disentangled sequential variational autoencoder (DSVAE) into the conventional contrastive learning framework. The DSVAE aims to disentangle speaker factors from content factors in an embedding space so that the speaker factors become the main contributor to the contrastive loss. Because content factors have been removed from contrastive learning, the resulting speaker embeddings will be content-invariant. The learned embeddings are also robust to language mismatch. It is shown that the proposed method consistently outperforms the conventional contrastive speaker embedding on the VoxCeleb1 and CN-Celeb datasets. This finding suggests that applying sequential disentanglement is beneficial to learning speaker-discriminative embeddings.
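As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below splits frame-level features into a time-invariant speaker factor and time-varying content factors, reconstructs the input from both factors in DSVAE style, and applies an NT-Xent-style contrastive loss to the speaker factor only. All module names, dimensions, and loss weights are illustrative assumptions, and the DSVAE's KL regularizers are omitted for brevity.

```python
# Minimal sketch (assumption: PyTorch; dimensions and weights are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSVAEContrastive(nn.Module):
    def __init__(self, feat_dim=80, spk_dim=256, cnt_dim=64):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, 256, batch_first=True)
        self.spk_head = nn.Linear(256, spk_dim)   # time-invariant speaker factor (pooled)
        self.cnt_head = nn.Linear(256, cnt_dim)   # per-frame content factors
        self.decoder = nn.GRU(spk_dim + cnt_dim, 256, batch_first=True)
        self.recon_head = nn.Linear(256, feat_dim)

    def forward(self, x):                          # x: (B, T, feat_dim)
        h, _ = self.encoder(x)                     # (B, T, 256)
        z_s = self.spk_head(h.mean(dim=1))         # (B, spk_dim): speaker factor
        z_c = self.cnt_head(h)                     # (B, T, cnt_dim): content factors
        z = torch.cat([z_s.unsqueeze(1).expand(-1, x.size(1), -1), z_c], dim=-1)
        d, _ = self.decoder(z)
        x_hat = self.recon_head(d)                 # reconstruction uses both factors
        return z_s, z_c, x_hat

def nt_xent(z1, z2, tau=0.1):
    """Contrastive loss on the speaker factors of two segments per utterance."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)       # (2B, D)
    sim = z @ z.t() / tau
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

# Usage: two augmented segments from the same utterance form a positive pair;
# only the speaker factors enter the contrastive loss, so content variation
# is absorbed by the content factors and the reconstruction term.
model = DSVAEContrastive()
x1, x2 = torch.randn(8, 200, 80), torch.randn(8, 200, 80)
z_s1, _, xh1 = model(x1)
z_s2, _, xh2 = model(x2)
loss = nt_xent(z_s1, z_s2) + 0.1 * (F.mse_loss(xh1, x1) + F.mse_loss(xh2, x2))
loss.backward()
```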

Original language: English
Pages (from-to): 2704-2715
Number of pages: 12
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 32
DOIs
Publication status: Published - 2024
