A Novel Video Annotation Framework Using Near-Duplicate Segment Detection

Chien Li Chou, Hua-Tsung Chen, Chun Chieh Hsu, Suh-Yin Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Traditional video annotation approaches focus on annotating keyframes, shots, or whole videos with semantic keywords. However, keyframe and shot extraction lacks semantic meaning, and a few keywords can hardly describe a video that spans multiple topics. We therefore propose a novel video annotation framework using near-duplicate segment detection, which not only preserves but also purifies the semantic meaning of the target annotation units. A hierarchical near-duplicate segment detection method is proposed to efficiently localize near-duplicate segments at the frame level. Videos containing near-duplicate segments are clustered, and the keyword distributions of the clusters are analyzed. Finally, the keywords ranked by keyword distribution score are annotated onto the obtained annotation units. Comprehensive experiments demonstrate the effectiveness of the proposed video annotation framework and near-duplicate segment detection method.
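
To make the annotation stage concrete, below is a minimal Python sketch of the last three steps the abstract describes: clustering videos that share near-duplicate segments, analyzing each cluster's keyword distribution, and ranking keywords for annotation. This is an illustration under stated assumptions, not the authors' implementation: the near-duplicate pairs are assumed to come from the hierarchical detector already, and the union-find clustering and the frequency-based distribution score are stand-ins for details the abstract does not specify.

    from collections import Counter, defaultdict

    def cluster_videos(pairs, n_videos):
        """Union-find clustering: videos linked (directly or
        transitively) by a shared near-duplicate segment end up
        in the same cluster."""
        parent = list(range(n_videos))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for a, b in pairs:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

        clusters = defaultdict(list)
        for v in range(n_videos):
            clusters[find(v)].append(v)
        return list(clusters.values())

    def rank_keywords(cluster, video_keywords, top_k=5):
        """Score each keyword by the fraction of videos in the
        cluster whose metadata mentions it, then keep the top-k."""
        counts = Counter(kw for v in cluster for kw in set(video_keywords[v]))
        scores = {kw: c / len(cluster) for kw, c in counts.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Toy input (hypothetical): (video_i, video_j) pairs reported by the
    # near-duplicate segment detector, plus per-video keywords harvested
    # from the titles and tags of the web videos.
    pairs = [(0, 1), (1, 2), (3, 4)]
    video_keywords = {
        0: ["goal", "soccer", "highlight"],
        1: ["soccer", "goal"],
        2: ["soccer", "match"],
        3: ["cooking", "pasta"],
        4: ["pasta", "recipe"],
    }

    for cluster in cluster_videos(pairs, n_videos=5):
        print(cluster, "->", rank_keywords(cluster, video_keywords))

Union-find fits here because segment sharing is transitive in effect: if video A shares a near-duplicate segment with B, and B with C, all three should draw their annotations from one pooled keyword distribution.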
Original language: English
Title of host publication: IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
State: Published - 2015

Keywords

  • video annotation
  • automatic annotation
  • near-duplicate segment detection
  • web video analysis
