Motion-aware temporal coherence for video resizing

Yu-Shuen Wang*, Hongbo Fu, Olga Sorkine, Tong-Yee Lee, Hans-Peter Seidel

*Corresponding author of this work

Research output: Conference contribution › peer-reviewed

110 Citations (Scopus)

Abstract

Temporal coherence is crucial in content-aware video retargeting. To date, this problem has been addressed by constraining temporally adjacent pixels to be transformed coherently. However, due to the motion-oblivious nature of this simple constraint, the retargeted videos often exhibit flickering or waving artifacts, especially when significant camera or object motions are involved. Since the feature correspondence across frames varies spatially with both camera and object motion, motion-aware treatment of features is required for video resizing. This motivated us to align consecutive frames by estimating interframe camera motion and to constrain relative positions in the aligned frames. To preserve object motion, we detect distinct moving areas of objects across multiple frames and constrain each of them to be resized consistently. We build a complete video resizing framework by incorporating our motion-aware constraints with an adaptation of the scale-and-stretch optimization recently proposed by Wang and colleagues. Our streaming implementation of the framework allows efficient resizing of long video sequences with low memory cost. Experiments demonstrate that our method produces spatiotemporally coherent retargeting results even for challenging examples with complex camera and object motion, which are difficult to handle with previous techniques.
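The key step described in the abstract is aligning consecutive frames by estimating interframe camera motion before imposing temporal-coherence constraints. Below is a minimal sketch of that alignment step, assuming a simple global homography model estimated with OpenCV feature matching; the function name and parameters are illustrative assumptions, not the authors' implementation, which instead feeds the aligned positions into a scale-and-stretch grid optimization rather than warping frames directly.

```python
# Hedged sketch of interframe camera-motion estimation for frame alignment.
# Assumption: a single homography approximates the camera motion between two
# consecutive frames; the paper's motion-aware constraints are built on top
# of such an alignment, not on warping the frames themselves.
import cv2
import numpy as np

def estimate_interframe_motion(prev_gray, curr_gray):
    """Estimate a homography mapping the previous frame onto the current one."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.eye(3, dtype=np.float64)  # no features: assume static camera

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return np.eye(3, dtype=np.float64)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly fit the homography; RANSAC discards correspondences on
    # independently moving objects, which the paper handles separately.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H if H is not None else np.eye(3, dtype=np.float64)
```

In a streaming setting such a per-frame-pair estimate can be computed on the fly over a sliding window of frames, which is consistent with the low-memory streaming implementation mentioned in the abstract, though the exact windowing scheme here is an assumption.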

Original language: English
Title of host publication: Proceedings of ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09
Volume: 28
Edition: 5
DOIs
Publication status: Published - 1 Dec 2009
Event: ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09 - Yokohama, Japan
Duration: 16 Dec 2009 → 19 Dec 2009

Conference

Conference: ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09
Country/Territory: Japan
City: Yokohama
Period: 16/12/09 → 19/12/09
