Abstract
In virtual reality (VR) applications, haptic gloves provide feedback and more direct control than bare hands do. Most VR gloves contain flex sensors and inertial measurement units for tracking the finger joints of a single hand; however, they lack a mechanism for tracking two-hand interactions. In this paper, a vision-based method is proposed for improved two-handed glove tracking. The proposed method requires only one camera attached to a VR headset. A photorealistic glove data generation framework was established to synthesize large quantities of training data for identifying the left, right, or both gloves in images with complex backgrounds. We also incorporated the “glove pose hypothesis” in the training stage, in which spatial cues regarding relative joint positions were exploited to accurately predict glove positions under severe self-occlusion or motion blur. In our experiments, a system based on the proposed method achieved an accuracy of 94.06% on a validation set and high-speed tracking at 65 fps on a consumer graphics processing unit.
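
The abstract describes the “glove pose hypothesis” only at a high level. As a minimal sketch of how a relative-joint spatial cue could be folded into a keypoint-regression training loss, consider the PyTorch-style example below; the class name `GlovePoseHypothesisLoss`, the `weight_rel` parameter, and the specific pairwise-offset formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GlovePoseHypothesisLoss(nn.Module):
    """Hypothetical loss: absolute joint regression plus a relative-joint
    term standing in for the paper's spatial cues between joints."""

    def __init__(self, weight_rel: float = 0.5):
        super().__init__()
        self.weight_rel = weight_rel
        self.l1 = nn.SmoothL1Loss()

    def forward(self, pred_joints: torch.Tensor, gt_joints: torch.Tensor) -> torch.Tensor:
        # pred_joints, gt_joints: (batch, num_joints, 2) image-plane keypoints.
        abs_loss = self.l1(pred_joints, gt_joints)
        # Pairwise offsets between joints encode relative positions, so the
        # network is also penalized for implausible joint layouts even when
        # individual joints are self-occluded or motion-blurred.
        pred_rel = pred_joints.unsqueeze(2) - pred_joints.unsqueeze(1)
        gt_rel = gt_joints.unsqueeze(2) - gt_joints.unsqueeze(1)
        rel_loss = self.l1(pred_rel, gt_rel)
        return abs_loss + self.weight_rel * rel_loss


if __name__ == "__main__":
    # Toy check: 21 joints per glove, 2D keypoints, batch of 4.
    loss_fn = GlovePoseHypothesisLoss()
    pred, gt = torch.rand(4, 21, 2), torch.rand(4, 21, 2)
    print(loss_fn(pred, gt).item())
```
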
| Original language | English |
| --- | --- |
| Pages (from-to) | 3133-3148 |
| Number of pages | 16 |
| Journal | Virtual Reality |
| Volume | 27 |
| Issue number | 4 |
| DOIs | |
| State | Published - Dec 2023 |
Keywords
- Glove dataset
- Glove tracking
- Hand pose estimation
- Hand tracking
- Haptic glove
- Vision-based tracking