Video object inpainting using posture mapping

Chih-Hung Ling*, Chia-Wen Lin, Chih-Wen Su, Hong-Yuan Mark Liao, Yong-Sheng Chen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Scopus citations


This paper presents a novel framework for object-based video inpainting. To complete an occluded object, our method first samples a 3-D volume of the video into directional spatio-temporal slices, and then performs patch-based image inpainting to repair the partially damaged object trajectories in the 2-D slices. The completed slices are subsequently combined to obtain a sequence of virtual contours of the damaged object. The virtual contours and a posture sequence retrieval technique are then used to retrieve the most similar sequence of object postures in the available non-occluded postures. Key-posture selection and indexing are performed to reduce the complexity of posture sequence retrieval. We also propose a synthetic posture generation scheme that enriches the collection of key-postures so as to reduce the effect of insufficient key-postures. Our experimental results demonstrate that the proposed method can maintain the spatial consistency and temporal motion continuity of an object simultaneously.
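The first stage of the pipeline, sampling the 3-D video volume into directional spatio-temporal slices, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a grayscale video stored as a (T, H, W) array, where a vertical slice at image column x collects that column over all frames, producing a 2-D (T, H) image in which a moving object traces a trajectory that patch-based image inpainting can then repair.

```python
import numpy as np

def spatiotemporal_slices(video, direction="vertical"):
    """Sample a (T, H, W) video volume into 2-D spatio-temporal slices.

    Hypothetical helper for illustration only; the paper's slicing may
    use other directions. Vertical slicing yields one (T, H) slice per
    image column; horizontal slicing yields one (T, W) slice per row.
    """
    t, h, w = video.shape
    if direction == "vertical":
        return [video[:, :, x] for x in range(w)]
    if direction == "horizontal":
        return [video[:, y, :] for y in range(h)]
    raise ValueError("direction must be 'vertical' or 'horizontal'")

# Example: a tiny 4-frame video of 8x6 pixels.
video = np.random.rand(4, 8, 6)
slices = spatiotemporal_slices(video, "vertical")
assert len(slices) == 6 and slices[0].shape == (4, 8)
```

Each slice is an ordinary 2-D image, so an off-the-shelf patch-based inpainting method can fill the occluded portion of the object trajectory before the slices are recombined into virtual contours.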

Original language: English
Title of host publication: 2009 IEEE International Conference on Image Processing, ICIP 2009 - Proceedings
Publisher: IEEE Computer Society
Number of pages: 4
ISBN (Print): 9781424456543
State: Published - 1 Jan 2009
Event: 2009 IEEE International Conference on Image Processing, ICIP 2009 - Cairo, Egypt
Duration: 7 Nov 2009 - 10 Nov 2009

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880


Conference: 2009 IEEE International Conference on Image Processing, ICIP 2009


  • Object completion
  • Posture mapping
  • Synthetic posture
  • Video inpainting

