Self attention in variational sequential learning for summarization

Jen Tzung Chien, Chun Wei Wang

Research output: Conference article, peer-reviewed

19 Citations (Scopus)

Abstract

Attention mechanisms play a crucial role in sequential learning for many speech and language applications. However, it is challenging to develop stochastic attention in a sequence-to-sequence model consisting of two recurrent neural networks (RNNs) as the encoder and decoder. Posterior collapse arises in variational inference: the estimated latent variables stay close to a standard Gaussian prior, so the information from the input sequence is disregarded during learning. This paper presents a new recurrent autoencoder for sentence representation in which a self-attention scheme is incorporated to activate the interaction between inference and generation during training. In particular, a stochastic RNN decoder is implemented to provide additional latent variables that fulfill self-attention for sentence reconstruction. Posterior collapse is alleviated, and the latent information is sufficiently attended in variational sequential learning. During the test phase, the estimated prior distribution of the decoder is sampled for stochastic attention and generation. Experiments on the Penn Treebank and Yelp 2013 show desirable generation performance in terms of perplexity. The visualization of attention weights also illustrates the usefulness of self-attention. The evaluation on DUC 2007 demonstrates the merit of the variational recurrent autoencoder for document summarization.
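To make the described architecture concrete, the sketch below illustrates one plausible reading of a stochastic RNN decoder that draws a latent variable at every step and applies self-attention over those latents before emitting a token. This is not the authors' code; the module names, layer sizes, single-layer GRU cell, and the exact way the encoder state enters the posterior are all assumptions made for illustration (PyTorch).

```python
# A minimal sketch (assumed implementation, not the authors' code) of a
# variational recurrent decoder with per-step latent variables and
# self-attention over those latents for reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalSelfAttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=32):
        super().__init__()
        self.z_dim = z_dim
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRUCell(emb_dim + z_dim, hid_dim)
        # Prior and approximate posterior over the per-step latent z_t.
        self.prior = nn.Linear(hid_dim, 2 * z_dim)
        self.post = nn.Linear(2 * hid_dim, 2 * z_dim)
        self.attn_q = nn.Linear(z_dim, z_dim)
        self.out = nn.Linear(hid_dim + z_dim, vocab_size)

    def reparameterize(self, stats):
        # Split into mean and log-variance, then draw z via the
        # reparameterization trick.
        mu, logvar = stats.chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        return mu + eps * torch.exp(0.5 * logvar)

    def forward(self, tokens, enc_h, sample_prior=False):
        # tokens: (batch, T) gold tokens for teacher forcing
        # enc_h:  (batch, hid_dim) final encoder state
        B, T = tokens.shape
        h = enc_h
        z_t = torch.zeros(B, self.z_dim, device=tokens.device)
        z_hist, logits = [], []
        for t in range(T):
            x_t = self.embed(tokens[:, t])
            h = self.rnn(torch.cat([x_t, z_t], dim=-1), h)
            if sample_prior:
                # Test phase: sample from the prior of the decoder.
                z_t = self.reparameterize(self.prior(h))
            else:
                # Training: posterior conditioned on encoder and decoder states.
                z_t = self.reparameterize(self.post(torch.cat([enc_h, h], dim=-1)))
            z_hist.append(z_t)
            # Self-attention over all latent variables drawn so far.
            Z = torch.stack(z_hist, dim=1)              # (B, t+1, z_dim)
            q = self.attn_q(z_t).unsqueeze(1)           # (B, 1, z_dim)
            w = F.softmax(torch.bmm(q, Z.transpose(1, 2)), dim=-1)
            ctx = torch.bmm(w, Z).squeeze(1)            # (B, z_dim)
            logits.append(self.out(torch.cat([h, ctx], dim=-1)))
        return torch.stack(logits, dim=1)               # (B, T, vocab_size)
```

Because the reconstruction at each step depends on the attended latent history, the decoder cannot ignore the latent variables, which is the intuition behind how this design counteracts posterior collapse; a full model would add the KL term between the posterior and prior to the training objective.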
