Multitask Learning for Automated Sleep Staging and Wearable Technology Integration

Hao Yi Chih, Tanveer Ahmed, Amy P. Chiu, Yu Ting Liu, Hsin Fu Kuo, Albert C. Yang, Der Hsien Lien*


Research output: Article, peer-reviewed


Scoring sleep stages is an essential procedure for the diagnosis of sleep disorders. Conventional sleep staging is a laborious and costly procedure requiring multimodal biological signals and an expert assessor. There has long been demand for approaches that obviate diagnostic procedures in specialized facilities and enable automated sleep staging. Herein, a high-performance multitask learning model enabling high-accuracy sleep staging from heart rate data is reported. The proposed algorithm outperforms competing machine and deep learning algorithms while requiring fewer computational resources when trained and evaluated on electrocardiography and photoplethysmogram (PPG) data. The reported model uses ≈7.5 times fewer training parameters and ≈75% less input data than previously reported models, yet yields better or comparable performance (mean per-night accuracy of 77.5% and Cohen's kappa of 0.643). To demonstrate its potential for wearable electronics, the reported algorithm is implemented in a fully integrated watch. This watch is a stand-alone, fully functional platform that automatically captures PPG data from the subject's wrist, predicts sleep stages, and displays the result on its screen as well as on an associated smartphone application.
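The Cohen's kappa quoted above measures epoch-by-epoch agreement between predicted and expert-scored sleep stages, corrected for chance agreement. A minimal sketch of the standard computation follows; the stage labels and function name are illustrative, not taken from the paper:

```python
from collections import Counter

def cohens_kappa(reference, predicted):
    """Cohen's kappa: agreement between two label sequences beyond chance."""
    assert len(reference) == len(predicted) and reference
    n = len(reference)
    # Observed agreement: fraction of epochs where the two raters match.
    observed = sum(r == p for r, p in zip(reference, predicted)) / n
    # Chance agreement: expected matches given each rater's label frequencies.
    ref_counts, pred_counts = Counter(reference), Counter(predicted)
    labels = set(ref_counts) | set(pred_counts)
    expected = sum(ref_counts[l] * pred_counts[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: per-epoch stages (W = wake, L = light, D = deep, R = REM).
ref = ["W", "L", "L", "D", "R", "L", "W", "D"]
pred = ["W", "L", "D", "D", "R", "L", "L", "D"]
print(round(cohens_kappa(ref, pred), 3))  # → 0.652
```

Note that kappa discounts chance agreement, so it is a stricter summary than raw accuracy when stage distributions are imbalanced, as they typically are across a night of sleep.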

Journal: Advanced Intelligent Systems
Publication status: Published - January 2024

