Efficiently Classifying Lung Sounds through Depthwise Separable CNN Models with Fused STFT and MFCC Features

Shing Yun Jung*, Chia Hung Liao, Yu Sheng Wu, Shyan-Ming Yuan*, Chuen-Tsai Sun

*Corresponding author of this work

Research output: Article › peer-review

57 Citations (Scopus)

Abstract

Lung sounds remain vital in clinical diagnosis, as they reveal associations with pulmonary pathologies. With COVID-19 spreading across the world, it has become more pressing for medical professionals to better leverage artificial intelligence for faster and more accurate lung auscultation. This research proposes a feature engineering process that extracts dedicated features for a depthwise separable convolutional neural network (DS-CNN) to classify lung sounds accurately and efficiently. We extracted three features for the compact DS-CNN model: the short-time Fourier transform (STFT) feature, the Mel-frequency cepstral coefficient (MFCC) feature, and a fusion of the two. We observed that while DS-CNN models trained on the STFT or MFCC feature alone achieved accuracies of 82.27% and 73.02%, respectively, fusing both features led to a higher accuracy of 85.74%. In addition, our method achieved 16-times-faster inference on an edge device with only 0.45% lower accuracy than RespireNet. These findings indicate that fusing STFT and MFCC features in a DS-CNN is a suitable model design for lightweight edge devices, enabling accurate AI-aided detection of lung diseases.
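To make the described pipeline concrete, the sketch below shows one way fused STFT and MFCC maps could be stacked as input channels for a small depthwise separable CNN. It assumes librosa for feature extraction and tf.keras SeparableConv2D layers; the sampling rate, FFT size, MFCC count, filter widths, and number of classes are illustrative placeholders, not the configuration reported in the paper.

```python
# Minimal sketch of STFT + MFCC feature fusion feeding a small DS-CNN.
# Library choices (librosa, tf.keras) and all hyperparameters are
# illustrative assumptions, not the paper's reported setup.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

def extract_fused_features(wav_path, sr=4000, n_fft=256, hop=128, n_mfcc=20):
    """Load a lung-sound recording and stack STFT and MFCC maps as channels."""
    y, sr = librosa.load(wav_path, sr=sr)
    # Log-magnitude STFT spectrogram
    stft_db = librosa.amplitude_to_db(
        np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)), ref=np.max)
    # MFCCs computed on the same frame grid
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop)
    # Resize the MFCC map to the STFT map's shape so the two can be fused
    mfcc_resized = tf.image.resize(mfcc[..., np.newaxis],
                                   stft_db.shape).numpy()[..., 0]
    return np.stack([stft_db, mfcc_resized], axis=-1)  # (freq, time, 2)

def build_ds_cnn(input_shape, num_classes=4):
    """Small depthwise separable CNN classifier (illustrative layer sizes)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.SeparableConv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])
```

Stacking the two feature maps as separate input channels lets the separable convolutions learn complementary spectral (STFT) and cepstral (MFCC) cues while keeping the parameter count low enough for edge deployment.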
Original language: English
Article number: 732
Journal: Diagnostics
Volume: 11
Issue number: 4
DOIs
Publication status: Published - 20 Apr 2021
