Mobile Virtual Assistant for Multi-Modal Depression-Level Stratification

Eric Hsiao Kuang Wu, Ting Yu Gao, Chia Ru Chung, Chun Chuan Chen, Chia Fen Tsai, Shih Ching Yeh

Research output: Article, peer-reviewed

1 citation (Scopus)

Abstract

Depression not only afflicts hundreds of millions of people but also contributes to the global disability and healthcare burden. The primary method of diagnosing depression relies on the judgment of medical professionals in clinical interviews with patients, which is subjective and time-consuming. Recent studies have demonstrated that text, audio, facial attributes, heart rate, and eye movement can be utilized for depression-level stratification. In this paper, we construct a virtual assistant for automatic depression-level stratification on mobile devices that actively guides users through voice dialogue and adapts the conversation content using emotion perception. During the conversation, features from text, audio, facial attributes, heart rate, and eye movement are extracted for multi-modal depression-level stratification. We use a feature-level fusion framework to integrate the five modalities and a deep neural network to classify the level of depression: healthy, mild, moderate, or severe depression, as well as bipolar disorder (formerly called manic depression). With outcome data from 168 subjects, experimental results show that feature-level fusion of all five modal features achieves the highest accuracy, 90.26 percent.
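The abstract describes feature-level fusion of five modalities followed by a deep neural network classifier over five classes. Below is a minimal sketch of that architecture in PyTorch; the per-modality feature dimensions, layer sizes, and dropout rate are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: feature-level (early) fusion of five modalities,
# then a DNN over the fused vector. All dimensions are placeholders.
import torch
import torch.nn as nn

# Assumed per-modality feature dimensions (not from the paper).
MODALITY_DIMS = {"text": 128, "audio": 64, "face": 96, "heart_rate": 8, "eye": 16}
NUM_CLASSES = 5  # healthy, mild, moderate, severe, bipolar disorder


class FusionClassifier(nn.Module):
    def __init__(self, modality_dims, num_classes):
        super().__init__()
        self.names = list(modality_dims)
        fused_dim = sum(modality_dims.values())  # concatenated feature width
        self.mlp = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, features):
        # `features` maps modality name -> (batch, dim) tensor; concatenating
        # along the feature axis implements feature-level fusion.
        fused = torch.cat([features[m] for m in self.names], dim=1)
        return self.mlp(fused)  # logits over the five depression levels


if __name__ == "__main__":
    model = FusionClassifier(MODALITY_DIMS, NUM_CLASSES)
    batch = {m: torch.randn(4, d) for m, d in MODALITY_DIMS.items()}
    print(model(batch).shape)  # torch.Size([4, 5])
```

Feature-level fusion simply concatenates the modality vectors before classification, in contrast to decision-level fusion, which would combine per-modality predictions.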

Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE Transactions on Affective Computing
Publication status: Accepted/In press - 2024
