Unseen Object Segmentation in Videos via Transferable Representations

Yi-Wen Chen, Yi-Hsuan Tsai, Chu-Ya Yang, Yen-Yu Lin, Ming-Hsuan Yang

Research output: Chapter (peer-reviewed)

1 citation (Scopus)

Abstract

In order to learn object segmentation models in videos, conventional methods require a large amount of pixel-wise ground truth annotations. However, collecting such supervised data is time-consuming and labor-intensive. In this paper, we exploit existing annotations in source images and transfer such visual information to segment videos with unseen object categories. Without using any annotations in the target video, we propose a method to jointly mine useful segments and learn feature representations that better adapt to the target frames. The entire process is decomposed into two tasks: 1) solving a submodular function for selecting object-like segments, and 2) learning a CNN model with a transferable module for adapting seen categories in the source domain to the unseen target video. We present an iterative update scheme between two tasks to self-learn the final solution for object segmentation. Experimental results on numerous benchmark datasets show that the proposed method performs favorably against the state-of-the-art algorithms.
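The abstract describes an alternation between two tasks: submodular selection of object-like segments and adaptation of the feature representation to the target video. The following is a minimal, hypothetical sketch of that loop. The function names, the facility-location-style objective, and the toy "adaptation" step are illustrative assumptions, not the authors' actual formulation or CNN.

```python
# Sketch of the iterative self-learning scheme: alternate between
# (1) greedy maximization of a submodular objective to pick object-like
# segments and (2) adapting features to the selected segments.
# Everything below is an assumed, simplified stand-in for the paper's method.

import numpy as np

def greedy_submodular_select(objectness, similarity, k, lam=0.5):
    """Pick k segments by greedily maximizing coverage of all candidate
    segments (facility-location term) plus a unary object-ness term."""
    n = len(objectness)
    selected, covered = [], np.zeros(n)
    for _ in range(k):
        best_i, best_gain = -1, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Marginal gain: coverage improvement + weighted object-ness.
            gain = (np.maximum(covered, similarity[i]).sum()
                    - covered.sum() + lam * objectness[i])
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        covered = np.maximum(covered, similarity[best_i])
    return selected

def adapt_features(features, picked, lr=0.1):
    """Toy stand-in for re-training the CNN: pull every segment feature
    toward the mean of the currently selected (pseudo-labeled) segments."""
    target = features[picked].mean(axis=0)
    return features + lr * (target - features)

rng = np.random.default_rng(0)
features = rng.normal(size=(20, 8))   # one feature vector per candidate segment
for it in range(3):                   # the iterative update between the two tasks
    objectness = features @ features.mean(axis=0)        # toy unary scores
    similarity = np.maximum(features @ features.T, 0.0)  # non-negative pairwise
    picked = greedy_submodular_select(objectness, similarity, k=5)
    features = adapt_features(features, picked)
    print(f"iteration {it}: selected segments {sorted(picked)}")
```

One reason objectives of this family are attractive: with non-negative similarities, the facility-location term is monotone submodular, so greedy selection carries the classic (1 - 1/e) approximation guarantee.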
Original language: American English
Title of host publication: Lecture Notes in Computer Science
Publisher: Springer Verlag
Pages: 615-631
Number of pages: 17
ISBN (Print): 9783030208691
Publication status: Published - 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11364 LNCS
