Bayesian sparse topic model

Jen-Tzung Chien*, Ying Lan Chang

*Corresponding author of this work

Research output: Article, peer-reviewed

20 Citations (Scopus)

Abstract

This paper presents a new Bayesian sparse learning approach to select salient lexical features for sparse topic modeling. Bayesian learning based on latent Dirichlet allocation (LDA) is performed by incorporating spike-and-slab priors. In this sparse LDA (sLDA), the spike distribution is used to select salient words, while the slab distribution is applied to establish the latent topic model based on the selected relevant words. A variational inference procedure is developed to estimate the prior parameters of sLDA. In experiments on document modeling using LDA and sLDA, we find that the proposed sLDA not only reduces the model perplexity but also reduces memory and computation costs. The Bayesian feature selection method effectively identifies relevant topic words for building a sparse topic model.
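As a rough illustration of the spike-and-slab construction described in the abstract (the notation and parameterization below are generic assumptions, not necessarily the paper's exact formulation), a binary selector decides whether word v is a salient feature of topic k, and the topic-word parameter is drawn from the slab only when the word is selected:

% Illustrative spike-and-slab prior over a topic-word parameter \beta_{kv}.
% b_{kv} = 1 keeps word v as a salient feature of topic k (slab component);
% b_{kv} = 0 collapses the parameter onto the spike at zero.
\begin{align}
  b_{kv} &\sim \mathrm{Bernoulli}(\pi_k), \\
  \beta_{kv} \mid b_{kv} &\sim b_{kv}\, p_{\mathrm{slab}}(\beta_{kv}) + (1 - b_{kv})\, \delta_0(\beta_{kv}).
\end{align}

Under this kind of prior, only words with b_{kv} = 1 contribute to a topic's word distribution, which is the source of the sparsity and the memory and computation savings mentioned in the abstract.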

Original language: English
Pages (from-to): 375-389
Number of pages: 15
Journal: Journal of Signal Processing Systems
Volume: 74
Issue number: 3
DOIs
Publication status: Published - 1 January 2014
