Robust vision-based glove pose estimation for both hands in virtual reality

Fu Song Hsu*, Te Mei Wang, Liang Hsun Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In virtual reality (VR) applications, haptic gloves provide feedback and more direct control than bare hands do. Most VR gloves contain flex and inertial measurement sensors for tracking the finger joints of a single hand; however, they lack a mechanism for tracking two-hand interactions. In this paper, a vision-based method is proposed for improved two-handed glove tracking. The proposed method requires only one camera attached to a VR headset. A photorealistic glove data generation framework was established to synthesize large quantities of training data for identifying the left, right, or both gloves in images with complex backgrounds. We also incorporated the "glove pose hypothesis" in the training stage, in which spatial cues regarding relative joint positions were exploited to accurately predict glove positions under severe self-occlusion or motion blur. In our experiments, a system based on the proposed method achieved an accuracy of 94.06% on a validation set and achieved high-speed tracking at 65 fps on a consumer graphics processing unit.

Original language: English
Pages (from-to): 3133-3148
Number of pages: 16
Journal: Virtual Reality
Volume: 27
Issue number: 4
State: Published - Dec 2023

Keywords

  • Glove dataset
  • Glove tracking
  • Hand pose estimation
  • Hand tracking
  • Haptic glove
  • Vision-based tracking
