Contactless facial video recording with deep learning models for the detection of atrial fibrillation

Yu Sun, Yin Yin Yang, Bing Jhang Wu, Po Wei Huang, Shao En Cheng, Bing Fei Wu*, Chun Chang Chen

*Corresponding author of this work

Research output: Article › peer-review

Abstract

Atrial fibrillation (AF) is often asymptomatic and paroxysmal, so screening and monitoring are needed, especially for people at high risk. This study sought to use camera-based remote photoplethysmography (rPPG) with a deep convolutional neural network (DCNN) learning model for AF detection. All participants were classified into groups of AF, normal sinus rhythm (NSR), and other abnormalities based on 12-lead ECG. They then underwent facial video recording for 10 min, with rPPG signals extracted and segmented into 30-s clips as inputs for training the DCNN models. Using a voting algorithm, a participant was predicted as AF if > 50% of their rPPG segments were determined to be AF rhythm by the model. Of the 453 participants (mean age, 69.3 ± 13.0 years; women, 46%), a total of 7320 segments (1969 AF, 1604 NSR, and 3747 others) were analyzed by the DCNN models. The accuracy of the rPPG deep learning model for discriminating AF from NSR and other abnormalities was 90.0% for 30-s segments and 97.1% for 10-min recordings. This contactless, camera-based rPPG technique with a deep learning model achieved high accuracy in discriminating AF from non-AF and may enable feasible large-scale screening or monitoring in the future.
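The participant-level decision rule described above is a simple majority vote over per-segment DCNN predictions. A minimal sketch of that rule, assuming hypothetical function and label names (the paper's actual implementation is not reproduced here):

```python
# Sketch of the majority-voting rule from the abstract (hypothetical names).
# Each 10-min facial recording yields 30-s rPPG segments; the DCNN assigns
# each segment a rhythm label, and the participant is flagged as AF when
# more than half of the segment labels are "AF".

def vote_participant(segment_labels):
    """Return 'AF' if >50% of per-segment DCNN labels are 'AF', else 'non-AF'."""
    af_count = sum(1 for lbl in segment_labels if lbl == "AF")
    return "AF" if af_count > len(segment_labels) / 2 else "non-AF"

# Example: 12 of 20 segments classified as AF -> participant flagged as AF.
labels = ["AF"] * 12 + ["NSR"] * 5 + ["other"] * 3
print(vote_participant(labels))  # -> AF
```

Note that an exact 50/50 split is treated as non-AF here, since the abstract requires strictly more than 50% of segments to be AF.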

Original language: English
Article number: 281
Journal: Scientific reports
Volume: 12
Issue number: 1
DOIs
Publication status: Published - Dec 2022
