Medical images under tampering

Min Jen Tsai*, Ping Ying Lin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Attacks on deep learning models are a constant threat in the world today. As more deep learning models and artificial intelligence (AI) systems are deployed across different industries, the likelihood of their being attacked increases dramatically. In this context, the medical domain is of the greatest concern, because an erroneous decision made by AI could have a catastrophic outcome and even lead to death. Therefore, a systematic procedure is built in this study to determine how well medical images can resist a specific adversarial attack, i.e., the one-pixel attack. This may not be the strongest attack, but it is simple and effective, and it could occur by accident or through equipment malfunction. The results of the experiment show that it is difficult for medical images to survive a one-pixel attack.
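The one-pixel attack named in the abstract is typically implemented with differential evolution (also listed in the keywords): a candidate solution encodes a single pixel's coordinates and channel values, and the optimizer searches for the perturbation that most reduces the model's confidence in the true class. The sketch below is a minimal, hedged illustration of that idea using `scipy.optimize.differential_evolution`; the `predict` function is a toy stand-in, not the paper's trained medical-image classifier.

```python
import numpy as np
from scipy.optimize import differential_evolution

def predict(image):
    """Toy stand-in classifier (hypothetical, not the paper's model):
    returns probabilities for two classes based on mean intensity."""
    p0 = image.mean() / 255.0
    return np.array([p0, 1.0 - p0])

def one_pixel_attack(image, true_class, iters=20):
    """Search for a single-pixel perturbation that minimizes the
    classifier's confidence in the true class, via differential evolution."""
    h, w, c = image.shape
    # Candidate solution: (row, col, channel values...)
    bounds = [(0, h - 1), (0, w - 1)] + [(0, 255)] * c

    def fitness(z):
        r, col = int(z[0]), int(z[1])
        perturbed = image.copy()
        perturbed[r, col] = z[2:]
        # Lower confidence in the true class = better attack.
        return predict(perturbed)[true_class]

    result = differential_evolution(fitness, bounds, maxiter=iters, seed=0)
    r, col = int(result.x[0]), int(result.x[1])
    adv = image.copy()
    adv[r, col] = result.x[2:]
    return adv
```

In practice the fitness function would query the deep model under test, and the attack is declared successful when the perturbed image's predicted label flips away from the true class.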

Original language: English
Pages (from-to): 65407-65439
Number of pages: 33
Journal: Multimedia Tools and Applications
Volume: 83
Issue number: 24
DOIs
State: Published - Jul 2024

Keywords

  • Adversarial attacks
  • Deep learning
  • Differential evolution
  • Medical image analysis
  • One-pixel attack
