Abstract
Inferring stochastic latent semantics for natural language applications is important but challenging. A key difficulty in stochastic sequential learning is posterior collapse in variational inference, where the estimated latent variables disregard the input sequence. This paper proposes three components to tackle this difficulty and build a variational sequence autoencoder (VSAE) in which sufficient latent information is learned for sophisticated sequence representation. First, complementary encoders based on a long short-term memory (LSTM) and a pyramid bidirectional LSTM are merged to characterize the global and structural dependencies of an input sequence, respectively. Second, a stochastic self-attention mechanism is incorporated in the recurrent decoder. The latent information is attended to so as to encourage interaction between inference and generation during encoder-decoder training. Third, an autoregressive Gaussian prior over the latent variables is used to preserve the information bound. Different variants of the VSAE are proposed to mitigate posterior collapse in sequence modeling. A series of experiments demonstrates that the proposed individual and hybrid sequence autoencoders substantially improve the performance of variational sequential learning in language modeling and in semantic understanding for document classification and summarization.
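The abstract only names the architectural components, so the following is a loose, assumption-labeled sketch of how the complementary encoders might be merged into a Gaussian posterior over the latent variable. The class names, layer sizes, two-layer pyramid depth, and last-step pooling are all hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn as nn

class PyramidBiLSTMEncoder(nn.Module):
    """Illustrative 2-layer pyramid BiLSTM: the sequence length is halved between
    layers by concatenating adjacent time steps (structural dependencies)."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.layer1 = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.layer2 = nn.LSTM(4 * hidden_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, x):                       # x: (B, T, input_dim), T assumed even
        h, _ = self.layer1(x)                   # (B, T, 2H)
        B, T, D = h.shape
        h = h.reshape(B, T // 2, 2 * D)         # merge adjacent steps -> (B, T/2, 4H)
        h, _ = self.layer2(h)                   # (B, T/2, 2H)
        return h[:, -1]                         # last-step summary: (B, 2H)

class VSAEEncoder(nn.Module):
    """Hypothetical merge of a plain LSTM encoder (global dependencies) with the
    pyramid BiLSTM encoder, parameterizing a Gaussian posterior for the latent z."""
    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.pyramid = PyramidBiLSTMEncoder(input_dim, hidden_dim)
        self.to_mu = nn.Linear(3 * hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(3 * hidden_dim, latent_dim)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)              # final LSTM state: (1, B, H)
        merged = torch.cat([h_n[-1], self.pyramid(x)], dim=-1)   # (B, 3H)
        mu, logvar = self.to_mu(merged), self.to_logvar(merged)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization
        return z, mu, logvar

# Usage sketch: a batch of 4 sequences of length 20 with 128-dimensional inputs.
enc = VSAEEncoder(input_dim=128, hidden_dim=256, latent_dim=32)
z, mu, logvar = enc(torch.randn(4, 20, 128))
```

The sampled z would then condition the recurrent decoder, with a KL term against the prior regularizing mu and logvar; the stochastic self-attention and autoregressive prior described in the abstract are not sketched here.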
| Original language | English |
| --- | --- |
| Number of pages | 12 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| DOIs | |
| Publication status | Published - March 2021 |