Robust vision-based glove pose estimation for both hands in virtual reality

Fu Song Hsu*, Te Mei Wang, Liang Hsun Chen

*Corresponding author for this work

Research output: Article › peer-review

Abstract

In virtual reality (VR) applications, haptic gloves provide feedback and more direct control than bare hands do. Most VR gloves contain flex and inertial measurement sensors for tracking the finger joints of a single hand; however, they lack a mechanism for tracking two-hand interactions. In this paper, a vision-based method is proposed for improved two-handed glove tracking. The proposed method requires only one camera attached to a VR headset. A photorealistic glove data generation framework was established to synthesize large quantities of training data for identifying the left, right, or both gloves in images with complex backgrounds. We also incorporated the "glove pose hypothesis" in the training stage, in which spatial cues regarding relative joint positions were exploited to accurately predict glove positions under severe self-occlusion or motion blur. In our experiments, a system based on the proposed method achieved 94.06% accuracy on a validation set and high-speed tracking at 65 fps on a consumer graphics processing unit.
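The abstract describes the "glove pose hypothesis" only at a high level. As an illustration, the following PyTorch-style sketch shows one plausible way a training loss could exploit spatial cues about relative joint positions alongside absolute positions, so that bone-vector structure still constrains predictions under occlusion or blur. The joint count, skeleton layout, and weighting here are assumptions for illustration, not the paper's formulation.

```python
# Illustrative sketch only: a pose-regression loss combining absolute joint
# positions with relative parent-to-child offsets. Skeleton and weights are
# hypothetical, not taken from the paper.
import torch

# Hypothetical 21-joint hand skeleton: parent index per joint (wrist = root).
PARENTS = [0, 0, 1, 2, 3, 0, 5, 6, 7, 0, 9, 10,
           11, 0, 13, 14, 15, 0, 17, 18, 19]

def pose_loss(pred, target, rel_weight=0.5):
    """pred, target: (batch, 21, 3) joint positions in camera space."""
    # Absolute joint-position error.
    abs_err = torch.mean(torch.norm(pred - target, dim=-1))
    # Relative cue: parent-to-child offsets (bone vectors) remain
    # informative when absolute positions are occluded or blurred.
    parents = torch.tensor(PARENTS)
    pred_rel = pred - pred[:, parents]
    target_rel = target - target[:, parents]
    rel_err = torch.mean(torch.norm(pred_rel - target_rel, dim=-1))
    return abs_err + rel_weight * rel_err
```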

Original language: English
Pages (from-to): 3133-3148
Number of pages: 16
Journal: Virtual Reality
Volume: 27
Issue number: 4
DOIs
Publication status: Published - December 2023
