Natural Light Can Also be Dangerous: Traffic Sign Misinterpretation under Adversarial Natural Light Attacks

Teng Fang Hsiao*, Bo Lun Huang, Zi Xiang Ni, Yan Ting Lin, Hong Han Shuai, Yung Hui Li, Wen Huang Cheng

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Common illumination sources like sunlight or artificial light may introduce hidden vulnerabilities to AI systems. Our paper delves into these potential threats, offering a novel approach to simulate varying light conditions, including sunlight, headlights, and flashlight illuminations. Moreover, unlike typical physical adversarial attacks requiring conspicuous alterations, our method utilizes a model-agnostic black-box attack integrated with the Zeroth Order Optimization (ZOO) algorithm to identify deceptive patterns in a physically-applicable space. Consequently, attackers can recreate these simulated conditions, deceiving machine learning models with seemingly natural light. Empirical results demonstrate the efficacy of our method, misleading models trained on the GTSRB and LISA datasets under natural-like physical environments with an attack success rate exceeding 70% across all digital datasets, and remaining effective against all evaluated real-world traffic signs. Importantly, after adversarial training using samples generated from our approach, models showcase enhanced robustness, underscoring the dual value of our work in both identifying and mitigating potential threats. https://github.com/BlueDyee/natural-light-attack.
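To make the black-box search described in the abstract more concrete, below is a minimal, hypothetical Python sketch. It pairs a toy circular-spot light renderer and a stand-in random classifier (both assumptions for illustration, not the paper's simulator or traffic-sign models) with ZOO-style coordinate-wise finite-difference gradient estimation over the light parameters. The authors' actual parameter space, rendering, and models differ; see the linked repository for their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in black-box classifier (NOT the paper's model): a fixed random
# linear readout over per-channel means, included only so the sketch runs
# end to end. In practice this would be a GTSRB/LISA traffic-sign classifier.
_W = rng.normal(size=(3, 43))
def query_model(image):
    logits = image.reshape(-1, 3).mean(axis=0) @ _W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def render_light(image, params):
    """Overlay a circular light spot; params = (cx, cy, radius, intensity) in [0, 1].

    A toy simplification of the paper's light-condition simulator.
    """
    h, w, _ = image.shape
    cx, cy, radius, intensity = params
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - cx * w) ** 2 + (yy - cy * h) ** 2)
    mask = np.clip(1.0 - dist / (radius * max(h, w) + 1e-8), 0.0, 1.0)
    return np.clip(image + intensity * mask[..., None], 0.0, 1.0)

def attack_loss(image, params, true_label):
    """Untargeted objective: probability the model assigns to the true class."""
    return query_model(render_light(image, params))[true_label]

def zoo_attack(image, true_label, steps=300, h=0.01, lr=0.1):
    """Coordinate-wise zeroth-order descent on the light parameters."""
    params = np.array([0.5, 0.5, 0.3, 0.5])        # initial spot placement
    for _ in range(steps):
        i = rng.integers(len(params))              # pick one coordinate
        e = np.zeros_like(params); e[i] = h
        # Symmetric finite-difference gradient estimate from two model queries.
        g = (attack_loss(image, params + e, true_label)
             - attack_loss(image, params - e, true_label)) / (2 * h)
        params[i] = np.clip(params[i] - lr * g, 0.0, 1.0)
    return params

# Usage: search for adversarial light parameters on a dummy 64x64 "sign".
sign = rng.uniform(size=(64, 64, 3))
adv_params = zoo_attack(sign, true_label=7)
print("adversarial light parameters:", adv_params)
```

The key design point the sketch illustrates is that only forward queries of the model are needed: gradients with respect to the (few) physical light parameters are estimated by finite differences, which is what makes the attack model-agnostic and physically reproducible.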

Original language: English
Title of host publication: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3903-3912
Number of pages: 10
ISBN (Electronic): 9798350318920
DOIs
State: Published - 3 Jan 2024
Event: 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024 - Waikoloa, United States
Duration: 4 Jan 2024 → 8 Jan 2024

Publication series

Name: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024

Conference

Conference: 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
Country/Territory: United States
City: Waikoloa
Period: 4/01/24 → 8/01/24

Keywords

  • Adversarial learning
  • Algorithms
  • Image recognition and understanding
  • adversarial attack and defense methods
