Abstract
Purpose: The first step in the typical treatment of vestibular schwannoma (VS) is to localize the tumor region, which is time-consuming and subjective because it relies on repeatedly reviewing different parametric magnetic resonance (MR) images. A reliable, automatic VS detection method can streamline the process. Methods: A convolutional neural network architecture, YOLO-v2 with a residual network as its backbone, was used to detect VS tumors in MR images. To improve performance, contrast-enhanced T1-weighted, T2-weighted, and T1-weighted images were combined into triple-channel images for feature learning. The triple-channel images were cropped to three sizes to serve as YOLO-v2 inputs. VS detection performance was evaluated for two residual backbone networks that downsampled the inputs by factors of 16 and 32. Results: The results demonstrated the VS detection capability of YOLO-v2 with a residual backbone. The average precision was 0.7953 for the model with 416 × 416-pixel inputs and a downsampling factor of 16 when both the confidence-score and intersection-over-union thresholds were set to 0.5. With an appropriately chosen confidence-score threshold, a higher average precision of 0.8171 was attained by the model with 448 × 448-pixel inputs and a downsampling factor of 16. Conclusion: We demonstrated successful VS tumor detection using YOLO-v2 with a residual backbone on resized triple-parametric MR images. The results indicate the influence of input image size, downsampling strategy, and confidence-score threshold on VS tumor detection.
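The triple-channel input construction and the detection criterion described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' code: the function names (`to_triple_channel`, `iou`), the use of NumPy and OpenCV for stacking and resizing, the synthetic image arrays, and the example box coordinates are all hypothetical and stand in for the paper's actual preprocessing and YOLO-v2 outputs.

```python
# Minimal sketch (not the authors' implementation): build a triple-channel image
# from three co-registered MR slices and check one detection against ground truth
# using the confidence >= 0.5 and IoU >= 0.5 thresholds mentioned in the abstract.
import numpy as np
import cv2  # assumption: OpenCV is available for resizing


def to_triple_channel(t1c, t2, t1, size=416):
    """Normalize each modality to [0, 1] and stack them as a 3-channel image."""
    def norm(x):
        x = x.astype(np.float32)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    img = np.stack([norm(t1c), norm(t2), norm(t1)], axis=-1)  # H x W x 3
    return cv2.resize(img, (size, size), interpolation=cv2.INTER_LINEAR)


def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)


# Synthetic 512 x 512 slices standing in for co-registered T1c, T2, and T1 images.
rng = np.random.default_rng(0)
t1c, t2, t1 = (rng.random((512, 512)) for _ in range(3))
x = to_triple_channel(t1c, t2, t1, size=416)        # 416 x 416 x 3 network input

# A detection counts as a true positive when its confidence score and its IoU
# with the ground-truth box both reach 0.5.
pred_box, pred_score = (100, 120, 180, 200), 0.83   # hypothetical YOLO-v2 output
gt_box = (105, 118, 175, 205)                       # hypothetical annotation
is_true_positive = pred_score >= 0.5 and iou(pred_box, gt_box) >= 0.5
print(x.shape, is_true_positive)
```

Average precision, as reported in the abstract, is then obtained from the precision-recall curve produced by sweeping the confidence-score threshold over detections matched in this way.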
| Original language | English |
|---|---|
| Pages (from-to) | 626-635 |
| Number of pages | 10 |
| Journal | Journal of Medical and Biological Engineering |
| Volume | 41 |
| Issue number | 5 |
| DOIs | |
| State | Published - Oct 2021 |
Keywords
- Convolutional Neural Network
- Multiparametric MR images
- Tumor Detection
- Vestibular Schwannoma
- YOLO-v2