Scalable and coherent video resizing with per-frame optimization

Yu-Shuen Wang*, Jen Hung Hsiao, Olga Sorkine, Tong Yee Lee


Research output: Conference contribution › peer reviewed

45 citations (Scopus)


The key to high-quality video resizing is preserving the shape and motion of visually salient objects while remaining temporally coherent. These spatial and temporal requirements are difficult to reconcile, typically leading existing video retargeting methods to sacrifice one of them, causing distortion or waving artifacts. Recent work enforces temporal coherence of content-aware video warping by solving a global optimization problem over the entire video cube. This significantly improves the results but does not scale well with the resolution and length of the input video and quickly becomes intractable. We propose a new method that solves the scalability problem without compromising the resizing quality. Our method factors the problem into spatial and time/motion components: we first resize each frame independently to preserve the shape of salient regions, and then we optimize their motion using a reduced model for each pathline of the optical flow. This factorization decomposes the optimization of the video cube into sets of subproblems whose size is proportional to a single frame's resolution and which can be solved in parallel. We also show how to incorporate cropping into our optimization, which is useful for scenes with numerous salient objects where warping alone would degenerate to linear scaling. Our results match the quality of state-of-the-art retargeting methods while dramatically reducing the computation time and memory consumption, making content-aware video resizing scalable and practical.
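To illustrate the flavor of the per-pathline motion step described in the abstract, the toy sketch below smooths the warped positions of a single optical-flow pathline by solving a small linear system that balances fidelity to the independently resized frames against preserving the original motion. This is not the paper's actual formulation: the 1-D coordinates, the quadratic energy, the weight `lam`, and the function name are all assumptions made for illustration. Each pathline yields its own small (tridiagonal) system, which is what makes the subproblems independent and parallelizable.

```python
import numpy as np

def smooth_pathline(warped, original, lam=10.0):
    """Hypothetical reduced-model solve for one optical-flow pathline.

    warped:   (T,) per-frame x-coordinates produced by independent
              per-frame resizing (the spatial step)
    original: (T,) x-coordinates in the source video; their frame-to-frame
              differences define the motion we want to preserve

    Minimizes
        sum_t (x_t - warped_t)^2
      + lam * sum_t ((x_{t+1} - x_t) - (original_{t+1} - original_t))^2
    which gives a tridiagonal linear system per pathline.
    """
    T = len(warped)
    A = np.eye(T)                       # data term: (x_t - warped_t)^2
    b = np.asarray(warped, dtype=float).copy()
    for t in range(T - 1):
        d = original[t + 1] - original[t]   # desired motion step
        # add lam * ((x_{t+1} - x_t) - d)^2 to the normal equations
        A[t, t] += lam
        A[t, t + 1] -= lam
        A[t + 1, t + 1] += lam
        A[t + 1, t] -= lam
        b[t] -= lam * d
        b[t + 1] += lam * d
    return np.linalg.solve(A, b)
```

If the per-frame warp already translates the pathline rigidly (motion identical to the source), the smoothing leaves it untouched; when the independent warps jitter across frames, the `lam`-weighted term pulls the trajectory back toward the original motion, suppressing the waving artifacts the abstract mentions.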

Host publication title: Proceedings of ACM SIGGRAPH 2011, SIGGRAPH 2011
Publication status: Published - 1 Jul 2011
Event: ACM SIGGRAPH 2011, SIGGRAPH 2011 - Vancouver, BC, Canada
Duration: 7 Aug 2011 - 11 Aug 2011


Conference: ACM SIGGRAPH 2011, SIGGRAPH 2011
City: Vancouver, BC

