Adversarial Data Augmentation Improves Unsupervised Machine Learning

Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu, Songtao Lu, Sijia Liu

Research output: Paper, peer-reviewed

Abstract

Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models. However, current studies focus on supervised learning tasks, relying on the ground-truth data label, a targeted objective, or supervision from a trained classifier. In this paper, we propose a framework for generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation. Our framework exploits a mutual information neural estimator as an information-theoretic similarity measure to generate adversarial examples without supervision. We propose a new MinMax algorithm with provable convergence guarantees for efficient generation of unsupervised adversarial examples. When using unsupervised adversarial examples as a simple plug-in data augmentation tool for model retraining, significant improvements are consistently observed across different unsupervised tasks and datasets, including data reconstruction, representation learning, and contrastive learning.
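
The following is a minimal, hypothetical sketch (not the authors' released code) of the idea described in the abstract: perturb an input so that a MINE-style mutual information estimate between the clean and perturbed samples decreases, subject to an L_inf budget, with no labels or trained classifier involved. All names (MineCritic, mi_lower_bound, epsilon, step sizes) are illustrative assumptions; the paper's MinMax algorithm additionally alternates critic (maximization) updates with perturbation (minimization) updates, which this sketch omits.

import math
import torch
import torch.nn as nn

class MineCritic(nn.Module):
    # Statistics network T(x, x') used in the Donsker-Varadhan bound.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def mi_lower_bound(critic, x, y):
    # Donsker-Varadhan lower bound on I(X; Y), estimated over a batch.
    joint = critic(x, y).mean()
    y_shuffled = y[torch.randperm(y.size(0))]  # samples from the product of marginals
    marginal = torch.logsumexp(critic(x, y_shuffled).squeeze(-1), dim=0) - math.log(y.size(0))
    return joint - marginal

def unsupervised_adversarial_example(x, critic, epsilon=0.03, step_size=0.01, steps=10):
    # PGD-style minimization: find a bounded perturbation that lowers the
    # estimated mutual information between x and x + delta (no labels needed).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        mi = mi_lower_bound(critic, x.flatten(1), (x + delta).flatten(1))
        mi.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # descend on the MI estimate
            delta.clamp_(-epsilon, epsilon)         # project onto the L_inf ball
        delta.grad.zero_()
    return (x + delta).detach()

In the data augmentation application described above, such perturbed samples would be added back to the training set and the unsupervised model (e.g., an autoencoder or contrastive encoder) retrained on the augmented data.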
Original language: American English
Publication status: Published - May 2021
Event: The International Conference on Learning Representations (ICLR) 2021
Duration: 4 May 2021 – 7 May 2021

Workshop

Workshop: The International Conference on Learning Representations (ICLR) 2021
Period: 4/05/21 – 7/05/21
