Learning priors for adversarial autoencoders

Hui Po Wang, Wen-Hsiao Peng*, Wei Jan Ko

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Most deep latent factor models adopt simple priors for tractability, computational convenience, or because a more suitable prior is unknown. Recent studies show that the choice of prior can have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that better characterize the data distribution. Experimental results show that the proposed model generates images of higher quality and learns better-disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task.
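
To make the code-generator idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the module names (CodeGenerator, Discriminator), layer sizes, and the single adversarial regularization step are all illustrative assumptions. It shows how a small network can map samples from a manually selected simple prior (a standard Gaussian) to a learned prior, which the usual AAE discriminator then matches against the encoder's latent codes.

```python
# Illustrative sketch only: a code generator that transforms a simple prior
# into a learned one, plus the AAE-style discriminator that regularizes the
# encoder's latent distribution toward that learned prior. Architecture
# details are assumptions, not taken from the paper.
import torch
import torch.nn as nn

latent_dim = 8

class CodeGenerator(nn.Module):
    """Maps a simple prior sample z ~ N(0, I) to a learned prior code."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(),
            nn.Linear(64, dim),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores whether a code comes from the learned prior or the encoder."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # outputs a logit
        )

    def forward(self, c):
        return self.net(c)

# One adversarial regularization step (encoder output is a stand-in here).
code_gen = CodeGenerator(latent_dim)
disc = Discriminator(latent_dim)
bce = nn.BCEWithLogitsLoss()

encoder_codes = torch.randn(32, latent_dim)          # stand-in for encoder(x)
prior_codes = code_gen(torch.randn(32, latent_dim))  # learned prior samples

# Discriminator treats learned-prior samples as real, encoder codes as fake.
d_loss = bce(disc(prior_codes), torch.ones(32, 1)) + \
         bce(disc(encoder_codes), torch.zeros(32, 1))
```

In a full training loop, the code generator and discriminator would be updated adversarially alongside the AAE's reconstruction objective, so the prior and the encoder's latent distribution are shaped jointly rather than fixed in advance.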

Original language: English
Article number: e4
Journal: APSIPA Transactions on Signal and Information Processing
Volume: 9
DOI:
State: Published - 20 Jan 2020

Keywords

  • Adversarial autoencoders
  • Deep learning
  • Latent factor models
  • Learned priors
