Reducing Domain Mismatch by Maximum Mean Discrepancy Based Autoencoders

Weiwei Lin, Man Wai Mak, Longxin Li, Jen Tzung Chien

Research output: Paper, peer-reviewed

20 citations (Scopus)


Domain mismatch, caused by the discrepancy between training and test data, can severely degrade the performance of speaker verification (SV) systems. Moreover, the training and test data themselves may each comprise heterogeneous subsets, with each subset corresponding to one sub-domain. These multi-source mismatches can further degrade SV performance. This paper proposes incorporating maximum mean discrepancy (MMD) into the loss function of autoencoders to reduce these mismatches. Specifically, we generalize MMD to measure the discrepancies among multiple distributions; we call this generalized MMD domain-wise MMD. Using domain-wise MMD as an objective function, we derive a domain-invariant autoencoder (DAE) for multi-source i-vector adaptation. The DAE directly encodes features that minimize the multi-source mismatch. By replacing the original i-vectors with these domain-invariant feature vectors for PLDA training, we reduce the EER by 11.8% on NIST 2016 SRE compared to PLDA without adaptation.
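The core quantity in the abstract can be illustrated with a short sketch. The paper does not give its exact kernel or estimator here, so the following assumes a Gaussian RBF kernel and the standard biased MMD estimator, and treats "domain-wise MMD" as the average of pairwise squared MMDs over all sub-domain pairs (a plausible reading of the generalization described above, not the authors' verified formulation):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between the rows of A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared MMD between two sample sets:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

def domain_wise_mmd2(domains, sigma=1.0):
    # Hypothetical multi-distribution extension: average the squared MMD
    # over all pairs of sub-domains (each entry is an (n_i, d) array).
    total, n_pairs = 0.0, 0
    for i in range(len(domains)):
        for j in range(i + 1, len(domains)):
            total += mmd2(domains[i], domains[j], sigma)
            n_pairs += 1
    return total / n_pairs
```

In the DAE setting described above, a term like `domain_wise_mmd2` evaluated on the encoder's outputs would be added to the autoencoder's reconstruction loss, pushing the latent features of all sub-domains toward a common distribution.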

Publication status: Published - 2018
Event: 2018 Speaker and Language Recognition Workshop, ODYSSEY 2018 - Les Sables d'Olonne, France
Duration: 26 Jun 2018 - 29 Jun 2018


Conference: 2018 Speaker and Language Recognition Workshop, ODYSSEY 2018
City: Les Sables d'Olonne