Enhancing music recognition using deep learning-powered source separation technology for cochlear implant users

Yuh Jer Chang, Ji Yan Han, Wei Chung Chu, Lieber Po Hung Li, Ying Hui Lai*

*Corresponding author for this work

Research output: Article, peer-reviewed

Abstract

The cochlear implant (CI) is currently the vital technological device for helping deaf patients hear sound and greatly enhances their listening experience. Unfortunately, it performs poorly for music listening because of the insufficient number of electrodes and inaccurate identification of music features. Therefore, this study applied source separation technology with a self-adjustment function to enhance the music listening benefits for CI users. In the objective analysis, the source-to-distortion, source-to-interference, and source-to-artifact ratios were 4.88, 5.92, and 15.28 dB, respectively, significantly better than those of the Demucs baseline model. In the subjective analysis, the proposed method scored approximately 28.1 and 26.4 points higher (out of 100), respectively, than the traditional baseline method VIR6 (vocal-to-instrument ratio of 6 dB) in the multi-stimulus test with hidden reference and anchor (MUSHRA). The experimental results showed that the proposed method can help CI users identify music in a live concert, and that the personalized self-fitting signal separation method outperformed all default baselines (vocal-to-instrument ratio of 6 dB or 0 dB). This finding suggests that the proposed system is a promising approach for enhancing the music listening benefits for CI users.
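As a rough illustration of the two evaluation ideas mentioned in the abstract, the sketch below (not the authors' implementation) shows how separated vocal and accompaniment stems might be remixed at a chosen vocal-to-instrument ratio (VIR) and scored with the standard BSS-Eval metrics (SDR, SIR, SAR) using the mir_eval package; the file paths, the remix_at_vir helper, and the RMS-based level matching are illustrative assumptions.

import numpy as np
import soundfile as sf
import mir_eval


def _rms(x):
    # Root-mean-square level of a signal, guarded against silence.
    return np.sqrt(np.mean(x ** 2) + 1e-12)


def remix_at_vir(vocals, accompaniment, vir_db=6.0):
    """Remix two stems so the vocal RMS sits vir_db dB above the accompaniment RMS."""
    vocals = vocals / _rms(vocals)
    accompaniment = accompaniment / _rms(accompaniment)
    mix = (10.0 ** (vir_db / 20.0)) * vocals + accompaniment
    return mix / (np.max(np.abs(mix)) + 1e-12)  # peak-normalize to avoid clipping


# Paths are placeholders; stems are assumed mono, equal length, and time-aligned.
ref_vocals, sr = sf.read("reference_vocals.wav")
ref_accomp, _ = sf.read("reference_accompaniment.wav")
est_vocals, _ = sf.read("separated_vocals.wav")
est_accomp, _ = sf.read("separated_accompaniment.wav")

# Objective scores: BSS-Eval SDR/SIR/SAR, one value per source.
sdr, sir, sar, _ = mir_eval.separation.bss_eval_sources(
    np.vstack([ref_vocals, ref_accomp]),
    np.vstack([est_vocals, est_accomp]),
)
print(f"SDR={sdr.mean():.2f} dB  SIR={sir.mean():.2f} dB  SAR={sar.mean():.2f} dB")

# Fixed-ratio remix corresponding to a VIR6-style baseline (+6 dB vocal emphasis);
# a listener-adjusted vir_db would play the role of a self-fitting condition.
sf.write("remix_vir6.wav", remix_at_vir(est_vocals, est_accomp, vir_db=6.0), sr)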

Original language: English
Pages (from-to): 1694-1703
Number of pages: 10
Journal: Journal of the Acoustical Society of America
Volume: 155
Issue number: 3
DOIs
Publication status: Published - 1 Mar 2024
