Sharing Model Framework for Zero-Shot Sketch-Based Image Retrieval

Yi Hsuan Ho, Der Lor Way, Zen Chung Shih

Research output: Contribution to journal › Article › peer-review


Sketch-based image retrieval (SBIR) is an emerging task in computer vision, and research interest has grown in solving it under the realistic and challenging setting of zero-shot learning. Given a sketch as a query, the goal is to retrieve the corresponding photographs in a zero-shot scenario. In this paper, we divide this challenging problem into three tasks and propose a sharing model framework that addresses them. First, the shared weights of the proposed model effectively reduce the modality gap between sketches and photographs. Second, semantic information, which the sketch and photograph domains share, is used to handle the different label spaces of the training and testing stages. Finally, a memory mechanism reduces the intrinsic variety among sketches, even when they belong to the same class; sketches and photographs dominate the embeddings in turn. Because sketches are not limited by language, our ultimate goal is a method that can replace text-based searches. We also designed a demonstration program that shows the use of the proposed method in real-world applications. Our results indicate that the proposed method achieves considerably higher zero-shot SBIR performance than other state-of-the-art methods on the challenging Sketchy, TU-Berlin, and QuickDraw datasets.
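To illustrate the retrieval step described above, the following is a minimal, hypothetical sketch (not the paper's actual model): it assumes a shared-weight encoder has already mapped sketches and photographs into a common embedding space, and ranks candidate photos by cosine similarity to the query sketch. The toy embeddings and photo names are invented for demonstration only.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(sketch_emb, photo_embs, k=2):
    # Rank photos by similarity to the sketch in the shared embedding space
    # and return the names of the top-k matches.
    ranked = sorted(photo_embs.items(),
                    key=lambda kv: cosine(sketch_emb, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy embeddings, standing in for outputs of a shared-weight encoder.
sketch = [0.9, 0.1, 0.0]
photos = {
    "cat_photo":  [0.8, 0.2, 0.1],
    "car_photo":  [0.1, 0.9, 0.3],
    "tree_photo": [0.0, 0.2, 0.9],
}
print(retrieve(sketch, photos, k=1))  # → ['cat_photo']
```

Because both modalities pass through the same encoder weights, sketch and photo embeddings are directly comparable; in a zero-shot setting, unseen classes are retrievable as long as their embeddings land near the query in this shared space.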

Original language: English
Article number: e14947
Journal: Computer Graphics Forum
Issue number: 7
State: Published - Oct 2023


CCS Concepts
  • Computing methodologies → Machine learning
  • Information systems → Information retrieval

