Action recognition with the augmented MoCap data using neural data translation

Shih-Yao Lin, Yen-Yu Lin

Research output: Contribution to conference › Paper › peer-review

Abstract

This study aims at generating reliable augmented training data for learning a robust deep model for action recognition. The prior knowledge inferred from only a few training examples is insufficient to represent the real data distribution well, which makes action recognition quite challenging. Inspired by recent advances in neural machine translation, we propose neural data translation (NDT) to tackle this issue by directly learning the mapping between paired data of the same action class in an end-to-end fashion. The proposed NDT is a sequence-to-sequence generative model. It can be trained with only a few paired training examples, and it generates an abundant set of augmented actions with diverse appearances. Specifically, we adopt stochastic pair selection to compile a set of paired training data, where each pair consists of two actions of the same class: one action serves as the input to NDT, while the other acts as the desired output. By learning the mapping between data of the same class, NDT implicitly encodes the intra-class variations, so it can synthesize high-quality actions for augmentation. We evaluated our method on two public datasets, Florence3D-Action and UCI HAR. The promising results demonstrate that the actions generated by our method effectively improve action recognition performance when only a few examples are available.
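The sketch below illustrates the two ideas the abstract describes: stochastic pair selection over same-class actions, and a sequence-to-sequence generator trained to map one action onto its paired partner. It is a minimal illustration, not the authors' implementation: PyTorch, fixed-length skeleton sequences, a GRU encoder-decoder, and an MSE reconstruction loss are all assumptions, and every name (`make_pairs`, `NDTSeq2Seq`, `JOINT_DIM`) is hypothetical.

```python
# Minimal sketch of NDT-style data augmentation (assumed PyTorch).
import random
import torch
import torch.nn as nn

JOINT_DIM = 45  # e.g., 15 skeleton joints x 3D coordinates (assumed)


def make_pairs(sequences, labels, pairs_per_class=100):
    """Stochastic pair selection: randomly draw two distinct actions of
    the same class; one serves as the NDT input, the other as the
    desired output."""
    by_class = {}
    for seq, lab in zip(sequences, labels):
        by_class.setdefault(lab, []).append(seq)
    pairs = []
    for seqs in by_class.values():
        if len(seqs) < 2:
            continue
        for _ in range(pairs_per_class):
            src, tgt = random.sample(seqs, 2)
            pairs.append((src, tgt))
    return pairs


class NDTSeq2Seq(nn.Module):
    """Sequence-to-sequence generator: encode the source action into a
    latent state, then decode a same-class action from that state."""

    def __init__(self, joint_dim=JOINT_DIM, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(joint_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(joint_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, joint_dim)

    def forward(self, src, tgt):
        # src, tgt: (batch, time, joint_dim)
        _, state = self.encoder(src)
        # Teacher forcing: feed the target frames, shifted by one step.
        dec_in = torch.cat([torch.zeros_like(tgt[:, :1]), tgt[:, :-1]], dim=1)
        dec_out, _ = self.decoder(dec_in, state)
        return self.out(dec_out)


# Training skeleton: regress generated frames onto the paired target,
# so the model absorbs intra-class variation (MSE loss is an assumption).
model = NDTSeq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()


def train_step(src, tgt):
    opt.zero_grad()
    loss = loss_fn(model(src, tgt), tgt)
    loss.backward()
    opt.step()
    return loss.item()
```

Under these assumptions, once trained, feeding different source actions of a class through the generator yields varied synthetic sequences that can be added to the training set as augmented examples.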

Original language: English
State: Published - 3 Sep 2018
Event: 29th British Machine Vision Conference, BMVC 2018 - Newcastle, United Kingdom
Duration: 3 Sep 2018 – 6 Sep 2018

Conference

Conference: 29th British Machine Vision Conference, BMVC 2018
Country/Territory: United Kingdom
City: Newcastle
Period: 3/09/18 – 6/09/18
