Variational dialogue generation with normalizing flows

Tien Ching Luo, Jen Tzung Chien

Research output: Conference article, peer-reviewed

10 citations (Scopus)


The conditional variational autoencoder (cVAE) has shown promising performance in dialogue generation. However, two issues remain in the dialogue cVAE model. The first is the Kullback-Leibler (KL) vanishing problem, which degenerates the cVAE into a simple recurrent neural network. The second is the assumption of an isotropic Gaussian prior for the latent variable, which is too simple to ensure diversity in the generated responses. To handle these issues, a simple distribution should be transformed into a complex distribution while the value of the KL divergence is preserved. This paper presents the dialogue flow VAE (DF-VAE) for variational dialogue generation. In particular, KL vanishing is tackled by a new normalizing flow. An inverse autoregressive flow is proposed to transform the isotropic Gaussian prior into a richer distribution. In the experiments, the proposed DF-VAE significantly outperforms the other methods across different evaluation metrics, and the diversity of the generated dialogue responses is enhanced. An ablation study illustrates the merit of the proposed flow models.
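The core mechanism the abstract describes, transforming an isotropic Gaussian prior into a richer distribution via an inverse autoregressive flow, can be sketched as below. This is a minimal illustrative example with a fixed linear autoregressive transform, not the paper's DF-VAE architecture; the masked weight matrices and single flow step are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # latent dimension (illustrative)

# Strictly lower-triangular weights make mu_i and sigma_i depend only on
# z_{<i} (the autoregressive property), so the Jacobian of the transform
# is triangular and its log-determinant is cheap to compute.
W_mu = np.tril(rng.normal(size=(d, d)), k=-1)
W_s = np.tril(rng.normal(size=(d, d)), k=-1)

def iaf_step(z):
    """One inverse autoregressive flow step: z' = mu(z) + sigma(z) * z.

    Because mu and sigma are autoregressive in z, the Jacobian is
    triangular and log|det J| = sum_i log sigma_i(z).
    """
    mu = W_mu @ z
    sigma = np.exp(W_s @ z)  # exponential keeps each sigma_i positive
    z_new = mu + sigma * z
    log_det = np.sum(np.log(sigma))
    return z_new, log_det

# Sample from the isotropic Gaussian base distribution and push it
# through the flow; stacking several such steps yields a flexible prior.
z0 = rng.normal(size=d)
z1, log_det = iaf_step(z0)
```

The tractable log-determinant is what lets the flow-transformed density enter the KL term of the variational objective exactly, which is why autoregressive flows are a natural fit here.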

Pages (from - to): 7778-7782
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Publication status: Published - 6 Jun 2021
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 6 Jun 2021 - 11 Jun 2021
