Semantic context detection using audio event fusion: Camera-ready version

Wei-Ta Chu*, Wen-Huang Cheng, Ja-Ling Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine (SVM)) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.
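The two-level fusion described above — segment-level audio event models whose outputs are pooled into a window-level feature and fed to a discriminative classifier — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-segment log-likelihoods stand in for outputs of trained event HMMs, and a hand-set linear rule stands in for a trained SVM; all numbers and weights are hypothetical.

```python
# Four representative audio events from the paper.
EVENTS = ["gunshot", "explosion", "engine", "car_braking"]

# Hypothetical per-event HMM log-likelihoods for four consecutive
# audio segments (placeholder values, not real model outputs).
segment_loglik = [
    {"gunshot": -12.1, "explosion": -15.3, "engine": -20.0, "car_braking": -21.5},
    {"gunshot": -11.0, "explosion": -13.9, "engine": -19.2, "car_braking": -22.0},
    {"gunshot": -18.5, "explosion": -17.8, "engine": -10.4, "car_braking": -12.2},
    {"gunshot": -10.0, "explosion": -14.2, "engine": -19.8, "car_braking": -20.5},
]

def event_histogram(segments):
    """Fuse segment-level event decisions into one window-level feature:
    the normalized count of each dominant audio event."""
    counts = {e: 0 for e in EVENTS}
    for seg in segments:
        best = max(seg, key=seg.get)  # most likely event for this segment
        counts[best] += 1
    n = len(segments)
    return [counts[e] / n for e in EVENTS]

feat = event_histogram(segment_loglik)

# A trained SVM would consume `feat`; here an illustrative linear rule
# stands in (positive score -> gunplay, negative -> car chasing).
w = [1.0, 1.0, -1.0, -1.0]
score = sum(wi * fi for wi, fi in zip(w, feat))
label = "gunplay" if score > 0 else "car_chasing"
```

In this toy window, gunshot dominates three of four segments, so the fused feature leans toward the gunplay context. The paper's generative alternative would instead model the sequence of event decisions with an ergodic HMM rather than pooling them into a histogram.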

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: EURASIP Journal on Applied Signal Processing
Volume: 2006
DOIs
State: Published - 30 Mar 2006

