ROSNet: Robust one-stage network for CT lesion detection

Kuan-Yu Lung, Chi-Rung Chang, Shao-En Weng, Hao-Siang Lin, Hong-Han Shuai, Wen-Huang Cheng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Scopus citations

Abstract

Automatic lesion detection from computed tomography (CT) scans is an important task in medical diagnosis. However, three frequent properties of medical data make CT lesion detection challenging: (1) Scale variance: lesion instances vary greatly in scale, and small lesions in particular are extremely difficult to detect; (2) Imbalanced data: the data distribution is highly imbalanced, with a few classes accounting for the majority of samples; (3) Prediction stability: based on our observations, a slight pixel shift or translation of an input lesion image can lead to drastic output mispredictions, which is unacceptable in medical applications. To address these challenges, this paper proposes a Robust One-Stage Network (ROSNet) for robust CT lesion detection. Specifically, a novel nested structure of neural networks is developed to generate a series of feature pyramids for detecting CT lesions at various scales, and an effective data-sensitive class-balanced loss as well as a shift-invariant downsampling strategy are introduced to further improve detection performance. Experiments conducted on a large-scale and diverse dataset, DeepLesion, show that ROSNet outperforms the best performance in MICCAI 2019 by 3.95% (2-class detection task) and 25.41% (8-class detection task) in terms of mean average precision (mAP).
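
The abstract does not specify how the data-sensitive class-balanced loss is formulated. As a rough, hedged illustration of the general idea of class-balanced reweighting for imbalanced lesion classes, the sketch below weights a cross-entropy loss by the "effective number of samples" per class (Cui et al., CVPR 2019); the function names, the beta value, and the example class counts are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def class_balanced_weights(samples_per_class: torch.Tensor, beta: float = 0.999) -> torch.Tensor:
        # Effective number of samples per class: E_n = (1 - beta^n) / (1 - beta)
        # (Cui et al., CVPR 2019). Rare classes receive larger weights.
        effective_num = 1.0 - torch.pow(beta, samples_per_class.float())
        weights = (1.0 - beta) / effective_num
        # Normalize so the weights sum to the number of classes (keeps loss scale stable).
        return weights * samples_per_class.numel() / weights.sum()

    def class_balanced_cross_entropy(logits: torch.Tensor, targets: torch.Tensor,
                                     samples_per_class: torch.Tensor,
                                     beta: float = 0.999) -> torch.Tensor:
        # Standard cross-entropy reweighted per class; a stand-in for (not a
        # reproduction of) ROSNet's data-sensitive class-balanced loss.
        weights = class_balanced_weights(samples_per_class, beta).to(logits.device)
        return F.cross_entropy(logits, targets, weight=weights)

    # Hypothetical usage with an imbalanced 8-class lesion label distribution:
    # counts = torch.tensor([5000, 1200, 800, 400, 300, 150, 90, 60])
    # loss = class_balanced_cross_entropy(logits, targets, counts)

With beta close to 1, the weighting approaches inverse-frequency reweighting; smaller beta values soften the correction, which is the usual knob when a few classes dominate the training data.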

Original language: American English
Pages (from-to): 82-88
Number of pages: 7
Journal: Pattern Recognition Letters
Volume: 144
DOIs
State: Published - Apr 2021

Keywords

  • Class-balanced loss
  • Computed tomography scan
  • Deep learning
  • Lesion detection
  • Multi-level feature pyramid
