Abstract
Domain mismatch, caused by the discrepancy between training and test data, can severely degrade the performance of speaker verification (SV) systems. Moreover, both the training and test data may themselves comprise heterogeneous subsets, each corresponding to one sub-domain. Such multi-source mismatches can further degrade SV performance. This paper proposes incorporating maximum mean discrepancy (MMD) into the loss function of autoencoders to reduce these mismatches. Specifically, we generalize MMD to measure the discrepancies among multiple distributions and call this generalized measure domain-wise MMD. Using domain-wise MMD as an objective function, we derive a domain-invariant autoencoder (DAE) for multi-source i-vector adaptation. The DAE directly encodes features that minimize the multi-source mismatch. By replacing the original i-vectors with these domain-invariant feature vectors for PLDA training, we reduce the EER by 11.8% in NIST 2016 SRE compared to PLDA without adaptation.
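The following is a minimal sketch, not taken from the paper, of how a domain-wise MMD penalty could be combined with an autoencoder's reconstruction loss. It assumes PyTorch, an RBF kernel, and a pairwise formulation of domain-wise MMD (average of MMD² over all pairs of sub-domains); all layer sizes, function names, and hyperparameters (`sigma`, `lam`, the 600-dimensional i-vector input) are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two samples under an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def domain_wise_mmd(embeddings, domain_ids, sigma=1.0):
    """Average MMD^2 over all pairs of sub-domains (one reading of 'domain-wise MMD')."""
    domains = domain_ids.unique()
    pair_losses = []
    for i in range(len(domains)):
        for j in range(i + 1, len(domains)):
            xi = embeddings[domain_ids == domains[i]]
            xj = embeddings[domain_ids == domains[j]]
            pair_losses.append(rbf_mmd2(xi, xj, sigma))
    return torch.stack(pair_losses).mean()

class DAE(nn.Module):
    """Autoencoder whose bottleneck features are regularized toward domain invariance."""
    def __init__(self, dim=600, hidden=300):   # i-vector dimension is an assumption
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def training_step(model, ivectors, domain_ids, optimizer, lam=1.0):
    """One optimization step: reconstruction loss plus a weighted domain-wise MMD term."""
    z, recon = model(ivectors)
    loss = nn.functional.mse_loss(recon, ivectors) + lam * domain_wise_mmd(z, domain_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, the encoder outputs `z` would replace the original i-vectors as input to PLDA training, which is the substitution the abstract describes.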
Original language | English |
---|---|
Pages | 162-167 |
Number of pages | 6 |
DOIs | |
Publication status | Published - 2018 |
Event | 2018 Speaker and Language Recognition Workshop, ODYSSEY 2018 - Les Sables d'Olonne, France; Duration: 26 Jun 2018 → 29 Jun 2018 |
Conference
Conference | 2018 Speaker and Language Recognition Workshop, ODYSSEY 2018 |
---|---|
Country/Territory | France |
City | Les Sables d'Olonne |
Period | 26/06/18 → 29/06/18 |