Improving the Kinect by cross-modal stereo

Wei-Chen Chiu, Ulf Blanke, Mario Fritz

    Research output: Contribution to conference › Paper › peer-review

    51 Scopus citations


    The introduction of the Microsoft Kinect sensor has stirred significant interest in the robotics community. While originally developed as a gaming interface, its high-quality depth sensor and affordable price have made it a popular choice for robotic perception. Its active sensing strategy is well suited to producing robust, high-frame-rate depth maps for human pose estimation. But the shift to the robotics domain has surfaced applications under a wider set of operating conditions it was not originally designed for. We see the sensor fail completely on transparent and specular surfaces, which are very common among everyday household objects. As these items are of great interest in home robotics and assistive technologies, we have investigated methods to reduce and sometimes even eliminate these effects without any modification of the hardware. In particular, we complement the depth estimate of the Kinect with a cross-modal stereo path obtained from disparity matching between the Kinect's IR and RGB sensors. We investigate how the RGB channels can be combined optimally to mimic the image response of the IR sensor, by an early fusion scheme of weighted channels as well as a late fusion scheme that computes stereo matches between the different channels independently. We show a strong improvement in the reliability of the depth estimate, as well as improved performance on an object segmentation task in a tabletop scenario.
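    The two fusion schemes mentioned in the abstract can be sketched in a few lines. The following is an illustrative toy implementation, not the authors' method: the channel weights, the naive SAD block matcher, and the minimum-cost late-fusion rule are all placeholder choices standing in for whatever the paper actually optimises.

    ```python
    import numpy as np

    def early_fusion(rgb, weights=(0.2, 0.3, 0.5)):
        # Weighted sum of the R, G, B channels, intended to approximate the
        # IR camera's spectral response. The weights here are illustrative;
        # the paper learns/optimises this combination.
        w = np.asarray(weights, dtype=np.float64)
        return np.tensordot(rgb.astype(np.float64), w / w.sum(), axes=([2], [0]))

    def _box1d(a, k):
        # Horizontal box-filter sum of width k (same output size, edge padding),
        # used to aggregate per-pixel matching costs over a window.
        pad = k // 2
        p = np.pad(a, ((0, 0), (pad, pad)), mode="edge")
        c = np.cumsum(p, axis=1)
        c = np.concatenate([np.zeros((a.shape[0], 1)), c], axis=1)
        return c[:, k:] - c[:, :-k]

    def block_match(left, right, max_disp=8, k=5):
        # Naive SAD block matching along scanlines.
        # Returns an integer disparity map and its per-pixel matching cost.
        best_cost = np.full(left.shape, np.inf)
        disp = np.zeros(left.shape, dtype=np.int64)
        for d in range(max_disp + 1):
            shifted = np.roll(right, d, axis=1)  # right pixel x-d under left pixel x
            cost = _box1d(np.abs(left - shifted), k)
            better = cost < best_cost
            disp[better] = d
            best_cost[better] = cost[better]
        return disp, best_cost

    def late_fusion(left_rgb, right_rgb, max_disp=8, k=5):
        # Match each colour channel against the IR/second view independently,
        # then keep, per pixel, the disparity from the lowest-cost channel.
        disps, costs = [], []
        for c in range(3):
            d, cst = block_match(left_rgb[..., c].astype(np.float64),
                                 right_rgb[..., c].astype(np.float64),
                                 max_disp, k)
            disps.append(d)
            costs.append(cst)
        disps, costs = np.stack(disps), np.stack(costs)
        pick = np.argmin(costs, axis=0)
        return np.take_along_axis(disps, pick[None], axis=0)[0]
    ```

    In this sketch, early fusion collapses RGB to a single IR-like image before one stereo match, while late fusion runs three independent matches and fuses the resulting disparity maps; a real pipeline would additionally rectify the IR/RGB pair and handle the different image resolutions of the two Kinect cameras.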

    Original language: English
    State: Published - 1 Jan 2011
    Event: 2011 22nd British Machine Vision Conference, BMVC 2011 - Dundee, United Kingdom
    Duration: 29 Aug 2011 - 2 Sep 2011


    Conference: 2011 22nd British Machine Vision Conference, BMVC 2011
    Country/Territory: United Kingdom

