Hierarchical and Self-Attended Sequence Autoencoder

Jen-Tzung Chien, Chun-Wei Wang

Research output: Article › peer-review

17 Citations (Scopus)

Abstract

It is important and challenging to infer stochastic latent semantics for natural language applications. The difficulty in stochastic sequential learning is caused by posterior collapse in variational inference, where the estimated latent variables disregard the input sequence. This paper proposes three components to tackle this difficulty and build a variational sequence autoencoder (VSAE) in which sufficient latent information is learned for sophisticated sequence representation. First, complementary encoders based on a long short-term memory (LSTM) and a pyramid bidirectional LSTM are merged to characterize the global and structural dependencies of an input sequence, respectively. Second, a stochastic self-attention mechanism is incorporated in the recurrent decoder. The latent information is attended to encourage interaction between inference and generation in the encoder-decoder training procedure. Third, an autoregressive Gaussian prior on the latent variables is used to preserve the information bound. Different variants of the VSAE are proposed to mitigate posterior collapse in sequence modeling. A series of experiments demonstrates that the proposed individual and hybrid sequence autoencoders substantially improve performance in variational sequential learning for language modeling and for semantic understanding in document classification and summarization.
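To make the encoder side of this design concrete, the following is a minimal PyTorch sketch of the complementary encoders: a global LSTM merged with a one-layer pyramid bidirectional LSTM, followed by Gaussian reparameterization of the latent variable. This is an illustrative assumption of the architecture, not the authors' code: the dimensions, the mean-pooling of pyramid outputs, and the single pyramid layer are hypothetical choices, and the KL term below is computed against a standard normal prior for brevity, whereas the paper uses an autoregressive Gaussian prior.

```python
# Hypothetical VSAE-style encoder sketch (not the authors' implementation).
# Merges a global LSTM with a one-layer pyramid BiLSTM, then parameterizes
# a Gaussian posterior q(z|x) via the reparameterization trick.
import torch
import torch.nn as nn


class PyramidBiLSTM(nn.Module):
    """One pyramid layer: concatenate adjacent time steps, then run a BiLSTM."""

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.bilstm = nn.LSTM(2 * input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        if t % 2 == 1:                       # pad to an even length
            x = torch.cat([x, x.new_zeros(b, 1, d)], dim=1)
            t += 1
        x = x.reshape(b, t // 2, 2 * d)      # merge adjacent frames (halves length)
        out, _ = self.bilstm(x)              # (b, t/2, 2 * hidden_dim)
        return out


class VSAEEncoder(nn.Module):
    """Complementary encoders merged into a Gaussian posterior over z."""

    def __init__(self, input_dim: int, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.global_lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.pyramid = PyramidBiLSTM(input_dim, hidden_dim)
        self.to_mu = nn.Linear(3 * hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(3 * hidden_dim, latent_dim)

    def forward(self, x: torch.Tensor):
        _, (h_global, _) = self.global_lstm(x)      # final state: (1, b, hidden)
        structural = self.pyramid(x).mean(dim=1)    # pooled: (b, 2 * hidden)
        merged = torch.cat([h_global[-1], structural], dim=-1)
        mu, logvar = self.to_mu(merged), self.to_logvar(merged)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar


# Usage: encode a batch of 8 embedded sequences of length 20.
enc = VSAEEncoder(input_dim=64, hidden_dim=128, latent_dim=32)
z, mu, logvar = enc(torch.randn(8, 20, 64))
# KL against N(0, I); the paper replaces this with an autoregressive prior.
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
print(z.shape, kl.shape)   # torch.Size([8, 32]) torch.Size([8])
```

The pyramid layer halves the sequence length at each level, which is what lets the structural encoder summarize longer-range dependencies at reduced recurrent depth while the plain LSTM preserves the global, step-by-step context.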
