Abstract
In this article, we address the novel and challenging task of video inference, which aims to infer video sequences from given non-consecutive video frames. Taking such frames as anchor inputs, our focus is to recover possible video sequence outputs based on the observed anchor frames at the associated time steps. With the proposed Stochastic and Recurrent Conditional GAN (SR-cGAN), we are able to preserve visual content across video frames while additionally handling possible temporal ambiguity. In the experiments, we show that our SR-cGAN not only produces preferable video inference results but can also be applied to the related tasks of video generation, video interpolation, video inpainting, and video prediction.
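The abstract does not give architectural details of SR-cGAN. As a rough illustration only, the sketch below shows one way a stochastic, recurrent conditional generator could be conditioned on observed anchor frames at their time indices while sampling noise at unobserved steps; every name, shape, and design choice here is an assumption for exposition, not the paper's actual model.

```python
# Hypothetical sketch (not the authors' SR-cGAN): a stochastic, recurrent
# conditional generator that fills in frames between observed anchors.
import torch
import torch.nn as nn

class RecurrentConditionalGenerator(nn.Module):
    def __init__(self, frame_dim=64 * 64, hidden_dim=256, noise_dim=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.encode = nn.Linear(frame_dim, hidden_dim)        # frame -> feature
        self.cell = nn.LSTMCell(hidden_dim + noise_dim, hidden_dim)
        self.decode = nn.Linear(hidden_dim, frame_dim)        # feature -> frame

    def forward(self, anchors, seq_len):
        """anchors: dict mapping time index t -> frame tensor of shape (B, frame_dim)."""
        B = next(iter(anchors.values())).size(0)
        h = torch.zeros(B, self.cell.hidden_size)
        c = torch.zeros(B, self.cell.hidden_size)
        prev = torch.zeros(B, self.decode.out_features)
        outputs = []
        for t in range(seq_len):
            # Condition on the observed anchor frame when one exists at time t,
            # otherwise recurse on the previously generated frame.
            frame_in = anchors.get(t, prev)
            z = torch.randn(B, self.noise_dim)                # stochastic component
            h, c = self.cell(torch.cat([self.encode(frame_in), z], dim=1), (h, c))
            prev = torch.tanh(self.decode(h))
            outputs.append(prev)
        return torch.stack(outputs, dim=1)                    # (B, seq_len, frame_dim)

# Toy usage: frames at t=0 and t=7 are observed; the remaining frames are inferred.
gen = RecurrentConditionalGenerator()
anchors = {0: torch.rand(2, 64 * 64), 7: torch.rand(2, 64 * 64)}
video = gen(anchors, seq_len=8)
print(video.shape)  # torch.Size([2, 8, 4096])
```

In a full adversarial setup, a discriminator over generated sequences would supply the GAN training signal; the noise input is what allows multiple plausible sequences for the same anchors, which is how temporal ambiguity could be handled.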
| Original language | English |
|---|---|
| Article number | 9099414 |
| Pages (from-to) | 440-449 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Emerging Topics in Computational Intelligence |
| Volume | 4 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - August 2020 |