Co-attention CNNs for unsupervised object co-segmentation

Kuang-Jui Hsu, Yen-Yu Lin, Yung-Yu Chuang

Research output: Chapter (peer-reviewed)

65 citations (Scopus)

Abstract

Object co-segmentation aims to segment the common objects in a set of images. This paper presents a CNN-based method that is unsupervised and end-to-end trainable to better solve this task. Our method is unsupervised in the sense that it does not require any training data in the form of object masks but merely a set of images jointly covering objects of a specific class. Our method comprises two collaborative CNN modules: a feature extractor and a co-attention map generator. The former extracts the features of the estimated objects and backgrounds, and is derived from the proposed co-attention loss, which minimizes inter-image object discrepancy while maximizing intra-image figure-ground separation. The latter learns to generate co-attention maps by which the estimated figure-ground segmentation can better fit the former module. Besides the co-attention loss, a mask loss is developed to retain whole objects and remove noise. Experiments show that our method achieves superior results, even outperforming state-of-the-art supervised methods.
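The co-attention loss described above combines two opposing terms: it pulls the object features of different images together (inter-image object consistency) while pushing each image's object features away from its own background features (intra-image figure-ground separation). As a minimal sketch, assuming per-image object and background feature vectors have already been pooled from the feature extractor, the loss could be expressed as follows; the function name, the pairwise-mean formulation, and the hinge `margin` are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def co_attention_loss(obj_feats, bg_feats, margin=1.0):
    """Sketch of a co-attention-style loss (illustrative, not the paper's exact form).

    obj_feats, bg_feats: (N, D) arrays of pooled per-image object/background
    features. The loss minimizes inter-image object discrepancy and encourages
    intra-image figure-ground separation up to a margin.
    """
    n = obj_feats.shape[0]
    # Inter-image object discrepancy: mean pairwise squared distance
    # between object features of different images (should be small).
    inter, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            inter += np.sum((obj_feats[i] - obj_feats[j]) ** 2)
            pairs += 1
    inter /= max(pairs, 1)
    # Intra-image figure-ground separation: mean squared distance between
    # each image's object and background features (should be large, so it
    # enters through a hinge that is zero once the margin is reached).
    sep = np.mean(np.sum((obj_feats - bg_feats) ** 2, axis=1))
    return inter + max(0.0, margin - sep)
```

In a full pipeline this scalar would be backpropagated through both CNN modules; the sketch only shows the shape of the objective: identical object features across images with well-separated backgrounds yield a low loss, while divergent object features raise it.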
Original language: American English
Title of host publication: IJCAI'18: Proceedings of the 27th International Joint Conference on Artificial Intelligence
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 748-756
Number of pages: 9
ISBN (Print): 9780999241127
Publication status: Published - July 2018

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2018-July

