Static2Dynamic: Video Inference from a Deep Glimpse

Yu-Ying Yeh, Yen-Cheng Liu, Wei-Chen Chiu, Yu-Chiang Frank Wang

Research output: Article › peer-reviewed

4 citations (Scopus)

Abstract

In this article, we address the novel and challenging task of video inference, which aims to infer full video sequences from given non-consecutive video frames. Taking such frames as anchor inputs, our goal is to recover plausible video sequences that match the observed anchor frames at their associated times. With the proposed Stochastic and Recurrent Conditional GAN (SR-cGAN), we are able to preserve visual content across video frames while handling possible temporal ambiguity. In the experiments, we show that our SR-cGAN not only produces preferable video inference results but can also be applied to the related tasks of video generation, video interpolation, video inpainting, and video prediction.
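To make the task concrete, the following is a minimal, purely illustrative sketch of the video-inference setting described above: observed anchor frames are kept verbatim, and the frames between them are produced by a stochastic recurrent generator. The weights, dimensions, and update rule here are placeholders and are not the authors' SR-cGAN architecture; the sketch only shows the input/output contract (anchor frames in, a full sequence out, with noise injected for temporal ambiguity).

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 64   # flattened frame size (illustrative, not from the paper)
NOISE_DIM = 16   # latent noise dimension (illustrative)
HIDDEN = 32      # recurrent state size (illustrative)

# Random weights stand in for a trained recurrent generator.
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_x = rng.normal(scale=0.1, size=(HIDDEN, FRAME_DIM + NOISE_DIM + FRAME_DIM))
W_o = rng.normal(scale=0.1, size=(FRAME_DIM, HIDDEN))


def generate_sequence(anchors, length):
    """Roll a recurrent generator across a sequence of `length` frames.

    anchors: dict {time_index: frame}. Observed anchor frames are copied
    through unchanged; missing frames are produced by the recurrence,
    conditioned on the previous frame, a noise sample (the "stochastic"
    part), and the first anchor frame.
    """
    h = np.zeros(HIDDEN)
    first_anchor = anchors[min(anchors)]
    prev = first_anchor
    frames = []
    for t in range(length):
        if t in anchors:
            # Anchor time step: preserve the observed frame exactly.
            frame = anchors[t]
        else:
            # Non-anchor step: stochastic recurrent generation.
            z = rng.normal(size=NOISE_DIM)
            inp = np.concatenate([prev, z, first_anchor])
            h = np.tanh(W_h @ h + W_x @ inp)
            frame = np.tanh(W_o @ h)
        frames.append(frame)
        prev = frame
    return np.stack(frames)


# Two anchor frames at t=0 and t=7; the six frames in between are inferred.
anchors = {0: np.zeros(FRAME_DIM), 7: np.ones(FRAME_DIM)}
seq = generate_sequence(anchors, 8)
```

Re-running `generate_sequence` with fresh noise yields a different in-between trajectory for the same anchors, which is the kind of temporal ambiguity the abstract refers to.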

Original language: English
Article number: 9099414
Pages (from–to): 440-449
Number of pages: 10
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 4
Issue number: 4
Publication status: Published - August 2020
