Imperceptible adversarial attack via spectral sensitivity of human visual system

Chen Kuo Chiang*, Ying Dar Lin, Ren Hung Hwang, Po Ching Lin, Shih Ya Chang, Hao Ting Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Adversarial attacks reveal that deep neural networks are vulnerable to adversarial examples. Intuitively, adversarial examples with larger perturbations produce a stronger attack, leading to lower recognition accuracy. However, increasing the perturbations also causes visually noticeable changes in the images. To address the problem of how to improve attack strength while maintaining visual quality, an imperceptible adversarial attack via the spectral sensitivity of the human visual system is proposed. Based on an analysis of the human visual system, the proposed method allows more perturbations as attack information and re-distributes the perturbations into pixels where the changes are imperceptible to human eyes. As a result, it achieves better Accuracy under Attack (AuA) than existing attack methods while maintaining image quality at a level similar to theirs. Experimental results demonstrate that our method improves the attack strength of existing adversarial attack methods by 3% to 23% while mostly keeping the loss in SSIM below 0.05.
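The core idea described in the abstract, shaping a fixed perturbation budget with a per-pixel visual-sensitivity map so that noise concentrates where the eye is least sensitive, can be sketched as follows. This is a minimal illustration under assumed details, not the paper's algorithm: the local-contrast proxy in sensitivity_map, the 7x7 window, and the L2 re-scaling in redistribute are hypothetical stand-ins for the paper's spectral-sensitivity model.

    import torch
    import torch.nn.functional as F

    def sensitivity_map(image, eps=1e-6):
        # Hypothetical proxy for HVS sensitivity: busy, high-contrast regions
        # mask noise, so sensitivity there is low. The paper instead derives
        # its map from the spectral sensitivity of the human visual system.
        gray = image.mean(dim=1, keepdim=True)                 # (B, 1, H, W)
        mean = F.avg_pool2d(gray, 7, stride=1, padding=3)
        var = F.avg_pool2d(gray ** 2, 7, stride=1, padding=3) - mean ** 2
        # More local variance -> stronger masking -> lower visual sensitivity.
        return 1.0 / (var.clamp(min=0).sqrt() + eps)

    def redistribute(perturbation, image, budget):
        # Re-weight an existing perturbation (e.g., an FGSM/PGD delta) toward
        # low-sensitivity pixels, then rescale to a fixed L2 budget so the
        # total attack energy matches the unshaped attack.
        weight = 1.0 / sensitivity_map(image)
        weight = weight / weight.amax(dim=(2, 3), keepdim=True)
        shaped = perturbation * weight
        norm = shaped.flatten(1).norm(dim=1).clamp(min=1e-12)
        return shaped * (budget / norm).view(-1, 1, 1, 1)

A shaped adversarial example would then be adv = (image + redistribute(delta, image, budget)).clamp(0, 1), spending the same overall budget as the unshaped attack while hiding more of it in textured regions.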

Original language: English
Pages (from-to): 59291-59315
Number of pages: 25
Journal: Multimedia Tools and Applications
Volume: 83
Issue number: 20
DOIs
State: Published - Jun 2024

Keywords

  • Deep learning
  • Human visual system
  • Imperceptible adversarial attack
  • Spectral sensitivity
