Motion-Aware Temporal Coherence for Video Resizing

Yu-Shuen Wang, Tong-Yee Lee, Hongbo Fu, Olga Sorkine, Hans-Peter Seidel

Research output: Article, peer-reviewed

4 Citations (Scopus)

Abstract

Temporal coherence is crucial in content-aware video retargeting. To date, this problem has been addressed by constraining temporally adjacent pixels to be transformed coherently. However, due to the motion-oblivious nature of this simple constraint, the retargeted videos often exhibit flickering or waving artifacts, especially when significant camera or object motions are involved. Since the feature correspondence across frames varies spatially with both camera and object motion, motion-aware treatment of features is required for video resizing. This motivated us to align consecutive frames by estimating interframe camera motion and to constrain relative positions in the aligned frames. To preserve object motion, we detect distinct moving areas of objects across multiple frames and constrain each of them to be resized consistently. We build a complete video resizing framework by incorporating our motion-aware constraints with an adaptation of the scale-and-stretch optimization recently proposed by Wang and colleagues. Our streaming implementation of the framework allows efficient resizing of long video sequences with low memory cost. Experiments demonstrate that our method produces spatiotemporally coherent retargeting results even for challenging examples with complex camera and object motion, which are difficult to handle with previous techniques.
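
As a rough illustration of the frame-alignment step described above (estimating interframe camera motion and warping consecutive frames into a common coordinate system), the sketch below uses OpenCV feature tracking and a RANSAC-fitted homography. This is an assumption-laden approximation for illustration only, not the authors' implementation; the paper's actual motion estimation and motion-aware constraint formulation may differ.

```python
# Hedged sketch: estimate interframe camera motion as a homography and
# align the current frame to the previous one. Illustrative only; the
# paper's method is not reproduced here.
import cv2
import numpy as np

def align_to_previous(prev_gray, curr_gray):
    """Estimate a homography mapping the current frame into the previous frame's coordinates."""
    # Detect sparse corners in the previous frame and track them into the current frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    if pts_prev is None:
        return np.eye(3)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good = status.ravel() == 1
    if good.sum() < 4:
        return np.eye(3)
    # Robustly fit the interframe camera motion with RANSAC.
    H, _ = cv2.findHomography(pts_curr[good], pts_prev[good], cv2.RANSAC, 3.0)
    return H if H is not None else np.eye(3)

# Usage: warp the current frame into the previous frame's coordinate system,
# so temporal-coherence constraints can relate aligned pixel positions.
# prev_g = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
# curr_g = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
# H = align_to_previous(prev_g, curr_g)
# aligned = cv2.warpPerspective(curr_frame, H,
#                               (curr_frame.shape[1], curr_frame.shape[0]))
```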

Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: ACM Transactions on Graphics
Volume: 28
Issue number: 5
DOIs
Publication status: Published - 1 Dec 2009
