Autosegmentation of prostate zones and cancer regions from biparametric magnetic resonance images by using deep-learning-based neural networks

Chih Ching Lai, Hsin Kai Wang, Fu Nien Wang, Yu Ching Peng, Tzu Ping Lin, Hsu Hsia Peng*, Shu Huei Shen*

*Corresponding authors of this work

Research output: Article, peer-reviewed

17 citations (Scopus)

Abstract

The accuracy of prostate cancer (PCa) diagnosis has increased with the development of multiparametric magnetic resonance imaging (mpMRI). Biparametric magnetic resonance imaging (bpMRI) was found to have a diagnostic accuracy comparable to mpMRI in detecting PCa. However, prostate MRI assessment relies on human experts and specialized training, with considerable inter-reader variability. Deep learning may be a more robust approach for prostate MRI assessment. Here we present a method for autosegmenting the prostate zones and cancer region by using SegNet, a deep convolutional neural network (DCNN) model. We used the PROSTATEx dataset to train the model and combined different sequences into three channels of a single image. For each subject, all slices that contained the transition zone (TZ), peripheral zone (PZ), and PCa region were selected. The datasets were produced using different combinations of images, including T2-weighted (T2W) images, diffusion-weighted images (DWI), and apparent diffusion coefficient (ADC) images. Among these groups, the T2W + DWI + ADC images exhibited the best performance, with a Dice similarity coefficient of 90.45% for the TZ, 70.04% for the PZ, and 52.73% for the PCa region. Image sequence analysis with a DCNN model has the potential to assist PCa diagnosis.
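The two core ideas in the abstract — stacking the three bpMRI sequences (T2W, DWI, ADC) into the channels of a single image, and scoring segmentations with the Dice similarity coefficient — can be sketched as follows. This is a minimal illustration using NumPy; the array shapes and preprocessing are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|); eps avoids division by zero
    when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Combine the three sequences into the channel axis of one image,
# analogous to the R, G, B channels of a color image (hypothetical
# 256x256 slices; real slice dimensions depend on the acquisition).
t2w = np.random.rand(256, 256)   # T2-weighted slice
dwi = np.random.rand(256, 256)   # diffusion-weighted slice
adc = np.random.rand(256, 256)   # apparent diffusion coefficient map
bpmri = np.stack([t2w, dwi, adc], axis=-1)  # shape (256, 256, 3)
```

A network such as SegNet then takes the three-channel `bpmri` array as input and predicts per-pixel zone labels, which are compared against expert annotations with `dice_coefficient`.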

Original language: English
Article number: 2709
Journal: Sensors
Volume: 21
Issue number: 8
DOIs
Publication status: Published - 2 Apr 2021
