Unseen Object Segmentation in Videos via Transferable Representations

Yi-Wen Chen, Yi-Hsuan Tsai, Chu-Ya Yang, Yen-Yu Lin, Ming-Hsuan Yang

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Abstract

To learn object segmentation models in videos, conventional methods require a large amount of pixel-wise ground-truth annotations. However, collecting such supervised data is time-consuming and labor-intensive. In this paper, we exploit existing annotations in source images and transfer this visual information to segment videos containing unseen object categories. Without using any annotations in the target video, we propose a method that jointly mines useful segments and learns feature representations that better adapt to the target frames. The entire process is decomposed into two tasks: 1) optimizing a submodular function to select object-like segments, and 2) learning a CNN model with a transferable module that adapts seen categories in the source domain to the unseen target video. We present an iterative update scheme between the two tasks to self-learn the final solution for object segmentation. Experimental results on benchmark datasets show that the proposed method performs favorably against state-of-the-art algorithms.
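The abstract only outlines the two tasks at a high level. As a rough illustration of the first step, the sketch below greedily maximizes a facility-location-style submodular objective to pick object-like segments; the objective, the objectness/similarity inputs, the trade_off weight, and the function name are assumptions made for illustration, not the paper's actual formulation. In the full method this selection would alternate with updating the CNN on the selected segments.

import numpy as np

def greedy_segment_selection(objectness, similarity, k, trade_off=0.5):
    """Hypothetical sketch of submodular segment selection.

    objectness: (n,) per-segment object-likeness scores.
    similarity: (n, n) pairwise segment similarities.
    k: number of segments to keep as pseudo ground truth.

    Greedily maximizes a facility-location-style objective:
    coverage of all segments by the selected set plus a weighted
    sum of objectness scores (a standard submodular construction).
    """
    n = len(objectness)
    selected = []
    covered = np.zeros(n)  # how well each segment is covered by the selection
    for _ in range(k):
        best_gain, best_idx = -np.inf, -1
        for i in range(n):
            if i in selected:
                continue
            # Marginal gain of adding segment i: coverage improvement + objectness.
            new_covered = np.maximum(covered, similarity[i])
            gain = (new_covered.sum() - covered.sum()) + trade_off * objectness[i]
            if gain > best_gain:
                best_gain, best_idx = gain, i
        selected.append(best_idx)
        covered = np.maximum(covered, similarity[best_idx])
    return selected

# Toy usage: pick 2 of 5 candidate segments.
rng = np.random.default_rng(0)
obj = rng.random(5)
sim = rng.random((5, 5))
sim = (sim + sim.T) / 2  # make the similarity matrix symmetric
print(greedy_segment_selection(obj, sim, k=2))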
Original language: American English
Title of host publication: Lecture Notes in Computer Science
Publisher: Springer Verlag
Pages: 615-631
Number of pages: 17
ISBN (Print): 9783030208691
State: Published - 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11364 LNCS
