Real-World Adversarial Example via Makeup

Chang-Sheng Lin, Chia Yi Hsu, Pin-Yu Chen, Chia-Mu Yu

Research output: Paper, peer-reviewed

7 citations (Scopus)


Deep neural networks have developed rapidly and achieved outstanding performance in several tasks, such as image classification and natural language processing. However, recent studies have shown that both digital and physical adversarial examples can fool neural networks. Face-recognition systems are deployed in many security-sensitive applications, which makes them a natural target for physical adversarial examples. Herein, we propose a physical adversarial attack based on full-face makeup. Because makeup on a human face is a commonplace occurrence, it can increase the imperceptibility of the attack. Our attack framework combines a cycle-consistent generative adversarial network (CycleGAN) with a victimized classifier: the CycleGAN generates the adversarial makeup, and the victimized classifier uses the VGG-16 architecture. Our experimental results show that the attack effectively tolerates manual errors in makeup application, such as color- and position-related errors. We also demonstrate that the way the models are trained influences physical attacks: adversarial perturbations crafted from a pre-trained model are affected by the corresponding training data.
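The framework described above trains a makeup generator against two pressures at once: stay close to a plausible made-up face while lowering the victim classifier's confidence in the true identity. The paper does not publish its loss function here, so the following is only a minimal numpy sketch of such a combined objective, assuming a cycle-consistency L1 term plus an untargeted adversarial term; the function names and the weight `lam` are illustrative, not from the paper:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def combined_loss(orig, recon, victim_logits, true_label, lam=1.0):
    """Illustrative combined objective for an adversarial-makeup generator:
    a cycle-consistency L1 term (the generated face, mapped back, should
    match the original) plus an untargeted adversarial term that rewards
    lowering the victim classifier's confidence in the true identity.
    All names and the weighting lam are assumptions for this sketch."""
    cycle = np.abs(orig - recon).mean()          # ~ ||x - G_inv(G(x))||_1
    p_true = softmax(victim_logits)[true_label]  # victim's confidence in x's identity
    adv = np.log(p_true + 1e-12)                 # minimizing this pushes p_true down
    return cycle + lam * adv

# Toy example: a 4-value "image" and a 3-identity victim classifier.
orig = np.array([0.20, 0.40, 0.60, 0.80])
recon = np.array([0.25, 0.38, 0.61, 0.79])       # near-perfect reconstruction
logits = np.array([2.0, 0.5, -1.0])              # victim still confident in identity 0
loss = combined_loss(orig, recon, logits, true_label=0)
```

In a real training loop the generator's parameters would be updated by gradients of this loss through both the reconstruction and the (frozen) victim network; the sketch only makes the trade-off between imperceptibility and misclassification concrete.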
Publication status: Published - May 2022
Event: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Virtual, Online, Singapore
Duration: 23 May 2022 - 27 May 2022


Conference: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
City: Virtual, Online

