Learning priors for adversarial autoencoders

Hui Po Wang, Wen-Hsiao Peng*, Wei Jan Ko

*Corresponding author for this work

Research output: Article › peer-review

6 Citations (Scopus)

Abstract

Most deep latent factor models choose simple priors for simplicity, for tractability, or because the proper prior is unknown. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate images of higher quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task.
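The core idea described above — transforming samples from a manually selected simple prior through a learned code generator before adversarially matching the encoder's code distribution to it — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network sizes, the NumPy MLP, and the random (untrained) weights are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(z, weights, biases):
    """Pass prior samples through a small MLP with tanh hidden layers."""
    h = z
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]  # linear output layer

# Hypothetical code generator: maps an 8-D standard Gaussian
# (the "simple prior") to a transformed 8-D prior.
# Weights are random here; in training they would be learned adversarially.
dims = [8, 32, 8]
weights = [rng.normal(scale=0.5, size=(i, o)) for i, o in zip(dims[:-1], dims[1:])]
biases = [np.zeros(o) for o in dims[1:]]

z_simple = rng.standard_normal((1000, 8))            # manually selected simple prior
z_learned = mlp_forward(z_simple, weights, biases)   # transformed prior samples

# In an AAE, the discriminator would then match the encoder's code
# distribution to z_learned rather than to z_simple.
print(z_learned.shape)  # (1000, 8)
```

The design point is that the generative network never sees the simple prior directly; it only ever works with the transformed codes, so a limited-capacity decoder does not have to compensate for a mismatched prior.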

Original language: English
Article number: e4
Journal: APSIPA Transactions on Signal and Information Processing
Volume: 9
DOIs
Publication status: Published - 20 Jan 2020

