Adversarial Attacks on Medical Image Classification

Min Jen Tsai*, Ping Yi Lin, Ming En Lee

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

Due to the growing number of medical images being produced by diverse radiological imaging techniques, radiography examinations with computer-aided diagnoses could greatly assist clinical applications. However, altering even a single pixel of an image can lead to an inaccurate prediction, and such a misclassification may in turn lead to the wrong clinical decision. This scenario mirrors adversarial attacks on deep learning models. Therefore, one-pixel and multi-pixel attacks on Deep Neural Network (DNN) models trained on various medical image datasets are investigated in this study. Common multiclass and multi-label datasets are examined under one-pixel attacks. Moreover, different experiments are conducted to determine how varying the number of perturbed pixels affects the classification performance and robustness of diverse DNN models. The experimental results show that the medical image classifiers rarely withstood the pixel attacks, raising concerns about the accuracy of medical image classification and highlighting the importance of a model's ability to resist such attacks in computer-aided diagnosis.
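For context, the one-pixel attack referenced in the abstract is typically cast as a black-box search over a single pixel's position and colour, driven by a metaheuristic such as differential evolution (one of the paper's keywords). Below is a minimal sketch of that idea, assuming a PyTorch classifier over RGB images with values in [0, 1]; the function names, image shape, and optimizer settings are illustrative assumptions, not the authors' exact experimental setup.

```python
import torch
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_label):
    """Black-box one-pixel attack via differential evolution.

    Hypothetical interface for illustration: `image` is a float tensor
    of shape (3, H, W) with values in [0, 1], and `model` maps a
    (1, 3, H, W) batch to class logits.
    """
    _, h, w = image.shape
    # Each candidate encodes (row, col, r, g, b) of the perturbed pixel.
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]

    def apply_pixel(candidate):
        row, col, r, g, b = candidate
        perturbed = image.clone()
        perturbed[:, int(row), int(col)] = torch.tensor(
            [r, g, b], dtype=image.dtype
        )
        return perturbed

    def objective(candidate):
        # Fitness: the model's confidence in the true class; the
        # optimizer drives this down until the image is misclassified.
        with torch.no_grad():
            logits = model(apply_pixel(candidate).unsqueeze(0))
            return torch.softmax(logits, dim=1)[0, true_label].item()

    result = differential_evolution(
        objective, bounds, maxiter=75, popsize=20, seed=0, tol=1e-5
    )
    # Return the perturbed image and the residual true-class confidence.
    return apply_pixel(result.x), result.fun
```

A multi-pixel variant of the kind studied in the paper extends the same search space to k (row, col, r, g, b) tuples, leaving the rest of the procedure unchanged.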

Original language: English
Article number: 4228
Journal: Cancers
Volume: 15
Issue number: 17
State: Published - Sep 2023

Keywords

  • adversarial learning
  • artificial intelligence
  • computer vision
  • machine learning
  • metaheuristic
