Latent Dirichlet allocation (LDA) has been successfully applied to document modeling and classification. LDA computes the document probability based on the bag-of-words assumption, without considering the order of words. It discovers the topic structure at the document level, which differs from the word prediction required in speech recognition. In this paper, we present a new latent Dirichlet language model (LDLM) for modeling of word sequences. A new Bayesian framework is introduced by incorporating Dirichlet priors to characterize the uncertainty of the latent topics of n-gram events. A robust topic-based language model is established accordingly. In the experiments, we implement LDLM for continuous speech recognition and obtain better performance than the probabilistic latent semantic analysis (PLSA) based language model.
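The core idea of a topic-based language model can be sketched as mixing topic-conditional word probabilities, with a Dirichlet prior smoothing the topic mixture. This is a minimal illustrative sketch, not the paper's actual LDLM derivation: the function names, the use of the Dirichlet posterior mean, and the toy parameters are all assumptions made for illustration.

```python
import numpy as np

def dirichlet_topic_posterior(alpha, topic_counts):
    # Posterior-mean estimate of the topic mixture theta under a
    # Dirichlet(alpha) prior, given observed topic counts n_k:
    #   E[theta_k] = (alpha_k + n_k) / sum_j (alpha_j + n_j)
    a = np.asarray(alpha, dtype=float) + np.asarray(topic_counts, dtype=float)
    return a / a.sum()

def topic_ngram_prob(word, topic_word_probs, topic_mixture):
    # Topic-mixture word probability given a history h:
    #   p(w | h) = sum_k p(w | z=k) * p(z=k | h)
    # topic_word_probs has shape (K topics, V words).
    return float(np.dot(topic_word_probs[:, word], topic_mixture))

# Toy example: 2 topics over a 3-word vocabulary.
topic_word = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.3, 0.6]])
theta = dirichlet_topic_posterior(alpha=[1.0, 1.0], topic_counts=[3, 1])
p_w0 = topic_ngram_prob(0, topic_word, theta)  # -> 0.5
```

Because each row of `topic_word` is a proper distribution and `theta` sums to one, the mixture `p(w | h)` also sums to one over the vocabulary.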