Image co-saliency detection via locally adaptive saliency map fusion

Chung-Chi Tsai, Xiaoning Qian, Yen-Yu Lin

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

11 Scopus citations

Abstract

Co-saliency detection aims at discovering the common and salient objects in multiple images. It explores not only intra-image but also inter-image visual cues, and hence compensates for the shortcomings of single-image saliency detection. The performance of co-saliency detection substantially relies on the explored visual cues. However, the optimal cues typically vary from region to region. To address this issue, we develop an approach that detects co-salient objects by region-wise saliency map fusion. Specifically, our approach takes intra-image appearance, inter-image correspondence, and spatial consistency into account, and accomplishes saliency detection with locally adaptive saliency map fusion by solving an energy optimization problem over a graph. It is evaluated on a benchmark dataset and compared to state-of-the-art methods. Promising results demonstrate its effectiveness and superiority.
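The locally adaptive fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual formulation; it only assumes a generic setup in which several candidate saliency maps give region-wise scores, per-map unary preferences are given, and per-region fusion weights are smoothed over a region-adjacency graph by minimizing a quadratic energy (a data term plus a graph-Laplacian smoothness term) in closed form. All variable names and the toy data are hypothetical.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): R = 4 regions on a 2x2
# grid, K = 3 candidate saliency maps, each giving one score per region.
rng = np.random.default_rng(0)
K, R = 3, 4
maps = rng.random((K, R))          # candidate saliency maps, region-wise

# Unary preference u[k, r]: how well map k fits region r (synthetic here;
# in practice this would come from the explored visual cues).
u = rng.random((K, R))

# Region-adjacency graph for the 2x2 grid; Laplacian L = D - A.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Quadratic energy per map k:  ||w - u_k||^2 + lam * w^T L w,
# whose minimizer solves the linear system (I + lam * L) w = u_k.
lam = 0.5
W = np.linalg.solve(np.eye(R) + lam * L, u.T).T   # shape (K, R)

# Normalize weights across maps per region, then fuse region-wise.
W = np.clip(W, 0.0, None)
W /= W.sum(axis=0, keepdims=True)
fused = (W * maps).sum(axis=0)     # fused saliency, one score per region
```

Because the smoothness term couples neighboring regions through the Laplacian, each region's weights are pulled toward those of its neighbors, which is the "locally adaptive" aspect: different regions can still favor different maps when their unary preferences differ strongly.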
Original language: American English
Title of host publication: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1897-1901
Number of pages: 5
ISBN (Print): 9781509041176
DOIs
State: Published - 16 Jun 2017

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

Keywords

  • Co-saliency detection
  • energy minimization
  • graph-based optimization
  • locally adaptive fusion
