Adversarial attacks on multi-focus image fusion models

Xin Jin*, Ruxin Wang, Shin Jye Lee, Qian Jiang, Shaowen Yao, Wei Zhou

*Corresponding author for this work

Research output: Article › Peer-reviewed

2 Citations (Scopus)

Abstract

Multi-focus image fusion aims to create an all-in-focus image by fusing a set of partially focused images. In recent years, various deep learning based multi-focus image fusion methods have been proposed, but there is no thorough study evaluating their robustness to adversarial attacks. In this paper, we investigate the robustness of deep learning based multi-focus image fusion models to adversarial attacks. First, we generated adversarial examples that significantly reduce the fusion quality of image fusion models. Then, we proposed a metric, defocus attack intensity (DAI), to quantitatively evaluate the robustness of different models to adversarial attacks. Finally, we analyzed the factors affecting model robustness, including model size and post-processing steps. In addition, we successfully attacked recent image fusion models in the black-box setting by exploiting the transferability of adversarial examples. Experimental results show that state-of-the-art image fusion models are also vulnerable to adversarial attacks, and that some observations from image classifier robustness studies do not transfer to the image fusion task.
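The abstract does not specify the paper's exact attack construction, but the general idea of crafting an input perturbation that degrades fusion quality can be illustrated with a standard FGSM-style gradient-sign attack. The sketch below is purely hypothetical: the `fuse` function is a toy pixel-wise average standing in for a real fusion network, the all-in-focus `target` is random data standing in for a ground-truth reference, and the gradient is taken numerically so the example stays dependency-free.

```python
import numpy as np

def fuse(a, b):
    # Toy stand-in for a fusion model: pixel-wise average of the two
    # partially focused sources. A real deep fusion network goes here.
    return 0.5 * (a + b)

def fgsm_attack(a, b, target, eps=0.03, h=1e-4):
    """FGSM-style perturbation of source image `a`: ascend the MSE loss
    between the fused output and the all-in-focus target, then step by
    eps in the direction of the gradient's sign."""
    def loss(x):
        return np.mean((fuse(x, b) - target) ** 2)

    # Central finite differences keep the sketch framework-free;
    # with a real network one would use autograd instead.
    grad = np.zeros_like(a)
    it = np.nditer(a, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        ap = a.copy(); ap[idx] += h
        am = a.copy(); am[idx] -= h
        grad[idx] = (loss(ap) - loss(am)) / (2 * h)

    # Clip so the adversarial image stays a valid [0, 1] image.
    return np.clip(a + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
a = rng.random((4, 4))       # partially focused source 1
b = rng.random((4, 4))       # partially focused source 2
target = rng.random((4, 4))  # stand-in for the all-in-focus reference

adv = fgsm_attack(a, b, target)
clean_err = np.mean((fuse(a, b) - target) ** 2)
adv_err = np.mean((fuse(adv, b) - target) ** 2)
print(adv_err > clean_err)   # the perturbed source degrades fusion quality
```

Because the perturbation is bounded by `eps` per pixel, the adversarial source remains visually close to the original while measurably worsening the fused result, which is the vulnerability the paper evaluates.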

Original language: English
Article number: 103455
Journal: Computers and Security
Volume: 134
DOIs
Publication status: Published - Nov 2023
