Abstract
Language modeling plays a critical role in automatic speech recognition. Conventional n-gram language models suffer from a poor representation of the word history and from having to estimate unseen parameters with insufficient training data. In this work, latent semantic information is exploited for language modeling and parameter smoothing. For language modeling, we present a new representation of the word history by retrieving the most likely relevant document. In addition, we develop a novel parameter smoothing method in which the language models of seen and unseen words are estimated by interpolating those of the k nearest seen words in the training corpus. The interpolation coefficients are determined by the closeness of the words in the semantic space. In the experiments, the proposed modeling and smoothing methods significantly reduce the perplexity of the language models at moderate computational cost.
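As a rough illustration (not taken from the paper itself), the smoothing step described above might look like the following sketch: an unseen word's n-gram probability is approximated by interpolating the probabilities of its k nearest seen words, with interpolation coefficients derived from similarity in a latent semantic space. The names `seen_vecs`, `ngram_prob`, and the cosine-similarity weighting are assumptions for illustration only.

```python
import numpy as np

def knn_smoothed_prob(word_vec, history, seen_words, seen_vecs, ngram_prob, k=5):
    """Sketch: estimate P(word | history) by interpolating the n-gram
    probabilities of the k seen words closest to `word_vec` in the latent
    semantic space. `ngram_prob(w, history)` and the word vectors are assumed
    to come from models trained elsewhere; all names are illustrative."""
    # Cosine similarity between the target word and every seen word.
    sims = seen_vecs @ word_vec / (
        np.linalg.norm(seen_vecs, axis=1) * np.linalg.norm(word_vec) + 1e-12
    )
    # Pick the k closest seen words.
    top = np.argsort(-sims)[:k]
    weights = np.clip(sims[top], 0.0, None)
    if weights.sum() == 0.0:
        weights = np.ones(k)
    weights /= weights.sum()  # interpolation coefficients sum to 1
    # Interpolate the neighbors' n-gram probabilities.
    return float(sum(w * ngram_prob(seen_words[i], history)
                     for w, i in zip(weights, top)))
```

The key design point the abstract describes is that the interpolation coefficients come from semantic closeness rather than being uniform or frequency-based; the cosine weighting above is just one plausible way to realize that.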
Original language | English |
---|---|
Pages | 1373-1376 |
Number of pages | 4 |
Publication status | Published - Oct 2004 |
Event | 8th International Conference on Spoken Language Processing, ICSLP 2004 - Jeju, Jeju Island, Korea, Republic of. Duration: 4 Oct 2004 → 8 Oct 2004 |
Conference
Conference | 8th International Conference on Spoken Language Processing, ICSLP 2004 |
---|---|
Country/Territory | Korea, Republic of |
City | Jeju, Jeju Island |
Period | 4/10/04 → 8/10/04 |