Medical images under tampering

Min Jen Tsai*, Ping Ying Lin

*Corresponding author of this work

Research output: Article › peer-review

Abstract

Attacks on deep learning models are a constant threat in the world today. As more deep learning models and artificial intelligence (AI) systems are deployed across different industries, the likelihood of them being attacked increases dramatically. In this context, the medical domain is of the greatest concern, because an erroneous decision made by AI could have a catastrophic outcome and even lead to death. Therefore, a systematic procedure is built in this study to determine how well medical images can resist a specific adversarial attack, i.e., a one-pixel attack. This may not be the strongest attack, but it is simple and effective, and it could occur by accident or through an equipment malfunction. The results of the experiment show that it is difficult for medical images to survive a one-pixel attack.
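The abstract does not detail the authors' implementation; as context, a one-pixel attack searches over a single coordinate and colour value for a perturbation that flips a classifier's prediction (the original formulation by Su et al. uses differential evolution for this search). The following is a minimal illustrative sketch, with a hypothetical toy classifier standing in for a real medical-imaging model:

```python
import numpy as np

def one_pixel_perturb(image, x, y, value):
    """Return a copy of `image` with only the pixel at (x, y) set to `value`.

    A full one-pixel attack would search over (x, y, value) candidates;
    here we just apply one candidate perturbation."""
    perturbed = image.copy()
    perturbed[y, x] = value
    return perturbed

# Hypothetical stand-in classifier (NOT the paper's model):
# predicts class 1 if mean intensity exceeds 0.5, else class 0.
def toy_classifier(image):
    return int(image.mean() > 0.5)

# An 8x8 grayscale image sitting just at the decision boundary.
img = np.full((8, 8), 0.5)
adv = one_pixel_perturb(img, 3, 3, 1.0)  # brighten a single pixel

print(toy_classifier(img))  # 0
print(toy_classifier(adv))  # 1 -- one pixel flipped the prediction
```

The sketch shows why such an attack is plausible even without malice: a single saturated pixel, as might arise from a faulty sensor, is enough to move a borderline input across a decision boundary.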

Original language: English
Journal: Multimedia Tools and Applications
Publication status: Accepted/In press - 2024
