Abstract
Driver monitoring is a popular topic of investigation. Contactless vital-sign measurement via remote photoplethysmography (rPPG) is an easy and effective alternative to cumbersome wearable devices during driving. However, driving is a noisy environment for remote vital-sign sensing, owing to disturbances from vehicle jolts and illumination changes caused by time of day and weather. In this study, we propose deep learning models for rPPG signal construction and heart rate estimation that work for both red-green-blue (RGB) and monochrome near-infrared (NIR) images. The proposed method overcomes the adaptation issues commonly seen in deep-learning-based heart rate estimation approaches and achieves a significant improvement in the robustness of heart rate measurement. To evaluate performance thoroughly, in addition to a public dataset, we construct four driving test sets covering various practical conditions, including driving at night, on rainy days, and on long-term journeys. It is also worth noting that we conducted the assessment with a cross-dataset protocol to further verify the effectiveness of our method. The experimental results on the five datasets show that our proposed method outperforms the state-of-the-art method for both RGB and NIR images, especially in some challenging cases. The root-mean-square error (RMSE) under additional head-motion conditions is improved by 28.6% with RGB images and by 21.91% with NIR images on the open dataset. Moreover, RMSE is improved by up to 42.6% at night, 14.41% on rainy days, and 18.78% on long-term journeys.
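The RMSE figures quoted above compare estimated heart rates against a reference measurement, and the percentage gains are relative reductions in RMSE versus a baseline method. As a minimal illustrative sketch (with made-up heart-rate values, not data from the paper), these quantities can be computed as:

```python
import math

def rmse(estimates, reference):
    """Root-mean-square error between estimated and reference heart rates (BPM)."""
    return math.sqrt(
        sum((e - r) ** 2 for e, r in zip(estimates, reference)) / len(estimates)
    )

def relative_improvement(rmse_baseline, rmse_proposed):
    """Percentage reduction in RMSE of a proposed method relative to a baseline."""
    return 100.0 * (rmse_baseline - rmse_proposed) / rmse_baseline

# Hypothetical example values in beats per minute (illustration only):
hr_reference = [72, 75, 78, 74]
hr_estimated = [70, 76, 80, 73]
print(rmse(hr_estimated, hr_reference))          # RMSE of the estimates
print(relative_improvement(10.0, 7.14))          # e.g. a 28.6% reduction
```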
Original language | English
---|---
Article number | 5031612
Pages (from-to) | 1-12
Number of pages | 12
Journal | IEEE Transactions on Instrumentation and Measurement
Volume | 72
DOIs | 
State | Published - 2023
Keywords
- Driver monitoring
- heart rate (HR)
- near-infrared (NIR) image
- neural network (NN)
- remote photoplethysmography (rPPG)
- robustness