Markerless 3D hand posture estimation from monocular video by two-level searching

Iek Kuong Pun, I-Chen Lin, Tsung Hsien Tang

    Research output: Contribution to conference › Paper › peer-review

    Abstract

    In this paper, a markerless 3D hand tracking system for monocular RGB video is presented. We propose a novel two-level approach to efficiently capture the personal characteristics and high variety of hand postures. Our system first searches for approximate nearest neighbors in a small personalized real-hand image set, and then retrieves finer details from a large synthetic 3D hand posture database. Temporal consistency is also exploited for disambiguation and noise reduction. Our prototype system can approximate hand poses, including rigid and non-rigid out-of-image-plane rotations and slow or fast gesture changes during rotation. It can also recover from a short-term missing-hand situation at an interactive rate.
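
    The abstract describes the two-level search only at a high level. The Python sketch below is an illustrative assumption of how such a coarse-to-fine lookup with a temporal-consistency term could be organized; all function and variable names, the brute-force distance computations, and the temporal-penalty weighting are hypothetical and are not taken from the authors' implementation.

    import numpy as np

    def two_level_pose_search(query_feat, personal_feats, personal_to_cluster,
                              synth_clusters, prev_pose, k=5, temporal_weight=0.3):
        """Hypothetical two-level hand-pose lookup (names and scoring are assumptions).

        Level 1: find the k nearest neighbors of the query feature in a small
        personalized real-hand image set.
        Level 2: refine within the synthetic 3D posture candidates linked to those
        neighbors, scoring each candidate by feature distance plus a penalty for
        deviating from the pose estimated in the previous frame.
        """
        # Level 1: nearest neighbors over the small personalized set (linear scan here).
        d1 = np.linalg.norm(personal_feats - query_feat, axis=1)
        top = np.argsort(d1)[:k]

        best_pose, best_score = None, np.inf
        for idx in top:
            # Level 2: candidates from the large synthetic 3D hand posture database
            # associated with this personalized exemplar.
            feats, poses = synth_clusters[personal_to_cluster[idx]]
            d2 = np.linalg.norm(feats - query_feat, axis=1)
            for j in np.argsort(d2)[:k]:
                # Temporal consistency: penalize large jumps away from the previous pose.
                score = d2[j] + temporal_weight * np.linalg.norm(poses[j] - prev_pose)
                if score < best_score:
                    best_score, best_pose = score, poses[j]
        return best_pose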

    Original language: English
    Pages: 204-211
    Number of pages: 8
    DOIs
    State: Published - 1 Jan 2013
    Event: 13th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2013 - Hong Kong, China
    Duration: 16 Nov 2013 – 18 Nov 2013

    Conference

    Conference: 13th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2013
    Country/Territory: China
    City: Hong Kong
    Period: 16/11/13 – 18/11/13

    Keywords

    • 3D gesture
    • advanced interface
    • hand tracking
