Abstract
The key to high-quality video resizing is preserving the shape and motion of visually salient objects while remaining temporally coherent. These spatial and temporal requirements are difficult to reconcile, typically leading existing video retargeting methods to sacrifice one of them and causing distortion or waving artifacts. Recent work enforces temporal coherence of content-aware video warping by solving a global optimization problem over the entire video cube. This significantly improves the results but does not scale well with the resolution and length of the input video and quickly becomes intractable. We propose a new method that solves the scalability problem without compromising the resizing quality. Our method factors the problem into spatial and time/motion components: we first resize each frame independently to preserve the shape of salient regions, and then we optimize their motion using a reduced model for each pathline of the optical flow. This factorization decomposes the optimization of the video cube into sets of subproblems whose size is proportional to a single frame's resolution and which can be solved in parallel. We also show how to incorporate cropping into our optimization, which is useful for scenes with numerous salient objects where warping alone would degenerate to linear scaling. Our results match the quality of state-of-the-art retargeting methods while dramatically reducing the computation time and memory consumption, making content-aware video resizing scalable and practical.
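The following is a minimal sketch of the two-stage factorization outlined in the abstract: a per-frame spatial resize followed by an independent temporal optimization for each optical-flow pathline. The function names, the nearest-neighbor column resampling stand-in for the content-aware per-frame warp, and the simple least-squares smoothing model are illustrative assumptions, not the authors' actual formulation; the point is only that each pathline yields a small subproblem that can be solved independently (and hence in parallel).

```python
import numpy as np


def resize_frame(frame, target_width):
    """Placeholder per-frame resize (nearest-neighbor column sampling).

    Stands in for the paper's content-aware per-frame warp that preserves
    the shape of salient regions; the actual warp differs.
    """
    h, w = frame.shape[:2]
    cols = np.linspace(0, w - 1, target_width).round().astype(int)
    return frame[:, cols]


def smooth_pathline(xs, lam=10.0):
    """Temporally smooth one pathline's horizontal coordinates.

    Solves min ||x - xs||^2 + lam * ||D x||^2 with D a first-difference
    operator: a reduced per-pathline least-squares model (illustrative,
    not the paper's exact motion energy).
    """
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    D = np.diff(np.eye(n), axis=0)        # first-difference operator
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, xs)


def retarget(frames, pathlines, target_width):
    """Stage 1: resize every frame independently.
    Stage 2: optimize motion per pathline; each subproblem is independent.
    """
    resized = [resize_frame(f, target_width) for f in frames]
    smoothed = {pid: smooth_pathline(xs) for pid, xs in pathlines.items()}
    return resized, smoothed
```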
| Original language | English |
|---|---|
| Pages (from-to) | 1-8 |
| Number of pages | 8 |
| Journal | ACM Transactions on Graphics |
| Volume | 30 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1 Jul 2011 |