TY - JOUR
T1 - Deep Transfer Learning for the Multilabel Classification of Chest X-ray Images
AU - Huang, Guan Hua
AU - Fu, Qi Jia
AU - Gu, Ming Zhang
AU - Lu, Nan Han
AU - Liu, Kuo Ying
AU - Chen, Tai Been
N1 - Publisher Copyright:
© 2022 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2022/6
Y1 - 2022/6
N2 - Chest X-ray (CXR) is widely used to diagnose conditions affecting the chest, its contents, and its nearby structures. In this study, we used a private data set containing 1630 CXR images with disease labels; most of the images were disease-free, but the others contained multiple sites of abnormalities. Here, we used deep convolutional neural network (CNN) models to extract feature representations and to identify possible diseases in these images. We also used transfer learning combined with large open-source image data sets to resolve the problems of insufficient training data and to optimize the classification model. In addition, we assessed the effects on transfer learning of different approaches to reusing pretrained weights (model fine-tuning and layer transfer), source data sets of different sizes and similarity levels to the target data (ImageNet, ChestX-ray, and CheXpert), methods of integrating source data sets into transfer learning (initiating, concatenating, and co-training), and backbone CNN models (ResNet50 and DenseNet121). The results demonstrated that transfer learning applied with the model fine-tuning approach typically afforded better prediction models. When only one source data set was adopted, ChestX-ray performed better than CheXpert; however, after ImageNet-pretrained initial weights were incorporated, CheXpert performed better. ResNet50 performed better in initiating transfer learning, whereas DenseNet121 performed better in concatenating and co-training transfer learning. Transfer learning with multiple source data sets was preferable to that with a single source data set. Overall, transfer learning can further enhance prediction capabilities and reduce computing costs for CXR images.
AB - Chest X-ray (CXR) is widely used to diagnose conditions affecting the chest, its contents, and its nearby structures. In this study, we used a private data set containing 1630 CXR images with disease labels; most of the images were disease-free, but the others contained multiple sites of abnormalities. Here, we used deep convolutional neural network (CNN) models to extract feature representations and to identify possible diseases in these images. We also used transfer learning combined with large open-source image data sets to resolve the problems of insufficient training data and to optimize the classification model. In addition, we assessed the effects on transfer learning of different approaches to reusing pretrained weights (model fine-tuning and layer transfer), source data sets of different sizes and similarity levels to the target data (ImageNet, ChestX-ray, and CheXpert), methods of integrating source data sets into transfer learning (initiating, concatenating, and co-training), and backbone CNN models (ResNet50 and DenseNet121). The results demonstrated that transfer learning applied with the model fine-tuning approach typically afforded better prediction models. When only one source data set was adopted, ChestX-ray performed better than CheXpert; however, after ImageNet-pretrained initial weights were incorporated, CheXpert performed better. ResNet50 performed better in initiating transfer learning, whereas DenseNet121 performed better in concatenating and co-training transfer learning. Transfer learning with multiple source data sets was preferable to that with a single source data set. Overall, transfer learning can further enhance prediction capabilities and reduce computing costs for CXR images.
KW - convolutional neural network
KW - deep learning
KW - source data set
KW - supervised classification
UR - http://www.scopus.com/inward/record.url?scp=85132412401&partnerID=8YFLogxK
U2 - 10.3390/diagnostics12061457
DO - 10.3390/diagnostics12061457
M3 - Article
AN - SCOPUS:85132412401
SN - 2075-4418
VL - 12
JO - Diagnostics
JF - Diagnostics
IS - 6
M1 - 1457
ER -