A comparative study of data fusion for RGB-D based visual recognition

Jordi Sanchez-Riera, Kai Lung Hua, Yuan Sheng Hsiao, Tekoing Lim, Shintami C. Hidayati, Wen-Huang Cheng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

58 Scopus citations

Abstract

Data fusion from different modalities has been extensively studied for a better understanding of multimedia contents. On one hand, the emergence of new devices and decreasing storage costs have led to growing amounts of data being collected. Although bigger data makes it easier to mine information, methods for big data analytics are not yet well investigated. On the other hand, new machine learning techniques, such as deep learning, have been shown to be one of the key elements in achieving state-of-the-art inference performance in a variety of applications. Therefore, some of the old questions in data fusion need to be addressed again in light of these changes. These questions are: What is the most effective way to combine data from various modalities? Does the fusion method affect performance differently with different classifiers? To answer these questions, in this paper we present a comparative study evaluating early and late fusion schemes with several types of SVM and deep learning classifiers on two challenging RGB-D based visual recognition tasks: hand gesture recognition and generic object recognition. The findings from this study provide useful policy and practical guidance for the development of visual recognition systems.
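To make the distinction between the two fusion schemes concrete, the sketch below contrasts early fusion (concatenating modality features before classification) with late fusion (classifying each modality separately and combining the scores). This is only an illustration of the general idea, not the authors' pipeline: the synthetic RGB/depth features and the simple nearest-centroid scorer are hypothetical stand-ins for the real features and SVM/deep classifiers used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample feature vectors for two modalities.
n, d_rgb, d_depth = 100, 8, 4
labels = rng.integers(0, 2, n)
rgb = rng.normal(labels[:, None], 1.0, (n, d_rgb))      # stand-in RGB features
depth = rng.normal(labels[:, None], 1.0, (n, d_depth))  # stand-in depth features

def centroid_scores(feats, y):
    """Score each sample by its distance to the two class centroids.

    Positive score => closer to the class-1 centroid. A trivial stand-in
    for the decision values an SVM or deep classifier would produce.
    """
    c0, c1 = feats[y == 0].mean(0), feats[y == 1].mean(0)
    d0 = np.linalg.norm(feats - c0, axis=1)
    d1 = np.linalg.norm(feats - c1, axis=1)
    return d0 - d1

# Early fusion: concatenate modality features, then run ONE classifier.
early = centroid_scores(np.hstack([rgb, depth]), labels) > 0

# Late fusion: run one classifier PER modality, then combine their scores
# (here by simple summation; weighted or learned combinations also exist).
late = (centroid_scores(rgb, labels) + centroid_scores(depth, labels)) > 0

print("early-fusion accuracy:", (early == labels).mean())
print("late-fusion accuracy:", (late == labels).mean())
```

The key design difference is where the combination happens: early fusion lets the classifier model cross-modal correlations at the cost of a higher-dimensional input, while late fusion keeps per-modality models independent and merges only their decision scores.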

Original language: English
Pages (from-to): 1-6
Number of pages: 6
Journal: Pattern Recognition Letters
Volume: 73
DOIs
State: Published - 1 Apr 2016

Keywords

  • CNN
  • DBN
  • Fusion
  • RGB-D
  • SAE
  • SVM

