Discriminative maximum entropy language model for speech recognition

Chuang Hua Chueh*, To Chang Chien, Jen-Tzung Chien

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

4 Scopus citations

Abstract

This paper presents a new discriminative language model based on the whole-sentence maximum entropy (ME) framework. In the proposed discriminative ME (DME) model, we exploit an integrated linguistic and acoustic model that incorporates features from the n-gram model together with the acoustic log likelihoods of the target and competing models. Through constrained optimization of the integrated model, we estimate the DME language model for speech recognition. We also illustrate the relation between DME estimation and maximum mutual information (MMI) estimation for language modeling: using the sentence-level log likelihood ratios of competing and target sentences as the acoustic features for ME language modeling is equivalent to performing MMI discriminative language modeling. In speech recognition experiments, the DME model achieved a lower word error rate than the conventional ME model.
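The abstract does not give the model equations, so the following Python sketch is only a rough illustration of the idea: a whole-sentence log-linear score that combines n-gram language-model features with a sentence-level acoustic log-likelihood-ratio feature, used to rescore an N-best list. The feature weights, bigram probabilities, and N-best hypotheses are all made up for illustration; in the actual DME framework the weights would be estimated by constrained optimization rather than fixed by hand.

```python
# Hypothetical N-best rescoring sketch of the whole-sentence DME idea.
# Each candidate sentence W receives a log-linear score combining:
#   - an n-gram language-model feature f_lm(W) (here, summed bigram log-probs)
#   - an acoustic feature f_ac(W): the sentence-level log-likelihood ratio
#     between the candidate (competing) sentence and the target sentence.
# The lambda weights are placeholders; the ME framework would estimate them.

def lm_log_prob(sentence, bigram_logprob, unk=-10.0):
    """Sum of bigram log-probabilities for a sentence (list of words)."""
    words = ["<s>"] + sentence + ["</s>"]
    return sum(bigram_logprob.get((w1, w2), unk)
               for w1, w2 in zip(words, words[1:]))

def dme_score(sentence, acoustic_loglik, target_loglik,
              bigram_logprob, lam_lm=1.0, lam_ac=0.1):
    """Whole-sentence log-linear score: LM feature + acoustic LLR feature."""
    f_lm = lm_log_prob(sentence, bigram_logprob)
    f_ac = acoustic_loglik - target_loglik  # sentence-level log-likelihood ratio
    return lam_lm * f_lm + lam_ac * f_ac

# Toy example: pick the best hypothesis from a two-entry N-best list.
bigrams = {("<s>", "recognize"): -1.0, ("recognize", "speech"): -0.5,
           ("speech", "</s>"): -0.3, ("<s>", "wreck"): -2.0,
           ("wreck", "a"): -1.5, ("a", "nice"): -1.0,
           ("nice", "beach"): -1.2, ("beach", "</s>"): -0.4}
nbest = [(["recognize", "speech"], -120.0),
         (["wreck", "a", "nice", "beach"], -118.0)]
target_ll = -119.0  # acoustic log likelihood of the target transcription

best = max(nbest, key=lambda h: dme_score(h[0], h[1], target_ll, bigrams))
print("Best hypothesis:", " ".join(best[0]))
```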

Original language: English
Pages: 721-724
Number of pages: 4
State: Published - Sep 2005
Event: 9th European Conference on Speech Communication and Technology - Lisbon, Portugal
Duration: 4 Sep 2005 - 8 Sep 2005

Conference

Conference: 9th European Conference on Speech Communication and Technology
Country/Territory: Portugal
City: Lisbon
Period: 4/09/05 - 8/09/05
