Multitask Learning for Automated Sleep Staging and Wearable Technology Integration

Hao Yi Chih, Tanveer Ahmed, Amy P. Chiu, Yu Ting Liu, Hsin Fu Kuo, Albert C. Yang, Der Hsien Lien*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Scoring sleep stages is an essential procedure in the diagnosis of sleep disorders. Conventional sleep staging is a laborious and costly process that requires multimodal biological signals and an expert assessor. There has long been demand for approaches that eliminate the need for diagnostic procedures in specialized facilities and enable automated sleep staging. Herein, a high-performance multitask learning model enabling high-accuracy sleep staging from heart rate data is reported. When trained and evaluated on electrocardiography and photoplethysmogram (PPG) data, the proposed algorithm outperforms competing machine and deep learning algorithms while requiring fewer computational resources. The reported model uses ≈7.5 times fewer training parameters and ≈75% less input data than previously reported models, yet yields better or comparable performance (mean per-night accuracy of 77.5% and Cohen's kappa of 0.643). To demonstrate its potential for wearable electronics, the algorithm is implemented in a fully integrated watch. The watch is a stand-alone, fully functional platform that automatically captures PPG data from the subject's wrist, predicts sleep stages, and displays the result on its screen as well as in an associated smartphone application.
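The abstract reports agreement with expert scoring as Cohen's kappa (0.643), a standard chance-corrected metric for sleep staging. As a minimal sketch of how that metric is computed (this is generic and not the authors' code; the example labels are hypothetical sleep-stage indices):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: inter-rater agreement corrected for chance.

    y_true: expert-scored stage labels (one per epoch)
    y_pred: model-predicted stage labels
    """
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    n = len(y_true)
    # Observed agreement: fraction of epochs where scorer and model agree.
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected chance agreement from the marginal label frequencies.
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    pe = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical 4-epoch night: 75% raw accuracy, kappa = 0.5 after
# correcting for chance agreement.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))
```

A kappa of 0.643, as reported here, is conventionally read as "substantial" agreement, which is why the metric is preferred over raw accuracy for imbalanced stage distributions.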

Original language: English
Journal: Advanced Intelligent Systems
State: Accepted/In press - 2023


  • machine learning
  • multitask learning
  • sleep staging
  • wearable technologies


