Adaptive motion data representation with repeated motion analysis

I-Chen Lin*, Jen Yu Peng, Chao Chih Lin, Ming Han Tsai

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



In this paper, we present a representation method for motion capture data that exploits the nearly repeated characteristics and spatiotemporal coherence of human motion. We extract similar motion clips of variable lengths or speeds across the database. Since the coding cost between matched clips is small, we propose a repeated motion analysis that extracts the referred and repeated clip pairs with maximum compression gains. To further exploit motion coherence, we approximate the subspace-projected clip motions or residuals by interpolated functions with range-aware adaptive quantization. Our experiments demonstrate that the proposed feature-aware method is computationally efficient and provides substantial compression gains with comparable reconstruction and perceptual errors.
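To make the quantization idea concrete, the following is a minimal sketch of range-aware adaptive quantization: each channel of a residual signal is quantized uniformly with a step size derived from that channel's own value range, so channels with small ranges retain fine precision. All function names and the bit-budget choice are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch of range-aware adaptive quantization for motion
# residuals. The step size adapts to each channel's min/max range, so
# low-variation channels are encoded with proportionally finer steps.

def quantize_channel(values, n_bits):
    """Uniformly quantize one channel using its own min/max range."""
    lo, hi = min(values), max(values)
    levels = (1 << n_bits) - 1
    if hi == lo:                       # constant channel: one stored value
        return [0] * len(values), lo, 0.0
    step = (hi - lo) / levels          # range-aware step size
    codes = [round((v - lo) / step) for v in values]
    return codes, lo, step

def dequantize_channel(codes, lo, step):
    """Reconstruct approximate values from integer codes."""
    return [lo + c * step for c in codes]

# Example: an 8-bit quantization of a small residual channel.
residual = [0.01, -0.02, 0.015, 0.0, -0.005]
codes, lo, step = quantize_channel(residual, 8)
recovered = dequantize_channel(codes, lo, step)
max_err = max(abs(a - b) for a, b in zip(residual, recovered))
```

Because the step size is tied to the channel's range, the worst-case reconstruction error is bounded by half a step, regardless of the channel's scale.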

Original language: English
Article number: 5660069
Pages (from-to): 527-538
Number of pages: 12
Journal: IEEE Transactions on Visualization and Computer Graphics
Issue number: 4
State: Published - 1 Jan 2011


Keywords
  • Compression (coding)-approximate methods
  • Three-dimensional graphics and realism-animation


