Adversarial Data Augmentation Improves Unsupervised Machine Learning

Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu, Songtao Lu, Sijia Liu

Research output: Contribution to conference › Paper › peer-review


Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models. However, current studies focus on supervised learning tasks, relying on the ground-truth data label, a targeted objective, or supervision from a trained classifier. In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation. Our framework exploits a mutual information neural estimator as an information-theoretic similarity measure to generate adversarial examples without supervision. We propose a new MinMax algorithm with provable convergence guarantees for efficient generation of unsupervised adversarial examples. When using unsupervised adversarial examples as a simple plug-in data augmentation tool for model retraining, significant improvements are consistently observed across different unsupervised tasks and datasets, including data reconstruction, representation learning, and contrastive learning.
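The paper's actual method builds on a mutual information neural estimator and a MinMax optimization with convergence guarantees; the following is only a rough, hypothetical sketch of the general idea of label-free adversarial examples. It runs a PGD-style ascent that perturbs an input to maximize the reconstruction error of a toy linear autoencoder — the linear model, the loss choice, and all names (`W`, `pgd_attack`, `eps`, etc.) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Hypothetical tied-weight linear autoencoder: encode with W, decode with W.T.
# (Stand-in for a trained unsupervised model; not the paper's setup.)
rng = np.random.default_rng(0)
d, k = 8, 3                              # input dim, latent dim
W = rng.standard_normal((k, d)) * 0.3    # "trained" encoder weights

def reconstruct(x):
    """Autoencoder forward pass: decode(encode(x))."""
    return W.T @ (W @ x)

def recon_loss(x):
    """Unsupervised objective: squared reconstruction error, no labels needed."""
    return float(np.sum((reconstruct(x) - x) ** 2))

def pgd_attack(x, eps=0.3, step=0.05, iters=20):
    """PGD-style ascent: find a perturbation delta (||delta||_inf <= eps)
    that increases the reconstruction error of x + delta."""
    A = W.T @ W
    M = A - np.eye(d)                    # reconstruct(x) - x = M @ x
    delta = np.zeros_like(x)
    for _ in range(iters):
        g = 2.0 * M.T @ M @ (x + delta)  # gradient of ||M(x+delta)||^2 w.r.t. delta
        delta = np.clip(delta + step * np.sign(g), -eps, eps)
    return x + delta
```

In the augmentation use case described above, such perturbed inputs would simply be appended to the training set before retraining the unsupervised model; the paper replaces this toy reconstruction loss with an information-theoretic similarity measure so the same recipe applies beyond autoencoders.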
Original language: American English
State: Published - May 2021
Event: The International Conference on Learning Representations (ICLR) 2021
Duration: 4 May 2021 - 7 May 2021


Workshop: The International Conference on Learning Representations (ICLR) 2021


