Efficient background modeling using nonparametric histogramming

Horng-Horng Lin, Li-Chen Shih, Jen-Hui Chuang

    Research output: Contribution to conference › Paper › peer-review


    Abstract

    With the rapid increase in the deployment of high-definition surveillance cameras, the need for efficient video analytics that can extract video objects from high-resolution surveillance videos in real time has become increasingly pressing. Conventional background modeling methods, e.g., Gaussian mixture modeling (GMM), although long proven effective for foreground object extraction, are not efficient enough for real-time analysis of high-resolution videos. We thus propose a novel background modeling approach using nonparametric histogramming that derives a holistic, histogram-based background model for each pixel with low computational complexity. Owing to its simple algorithm design, the proposed approach can be easily implemented with fixed-point computation. Without using any accelerator (such as CUDA, Intel SIMD, or the Intel IPP library), multi-threading, or sub-sampling techniques, our implementation of the proposed algorithm processes 1920×1080 color videos at ∼18.81 fps on a general computer (Intel Core i7, 3.4 GHz CPU). In the experimental comparisons, the proposed approach is ∼3.9 times faster than the GMM while giving comparable foreground segmentation results.
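To illustrate the general idea of per-pixel, histogram-based background modeling described in the abstract, the following is a minimal sketch. The bin count, update scheme, and foreground threshold here are illustrative assumptions, not the paper's exact algorithm; note that the bin quantization uses only integer shifts, which is what makes a fixed-point implementation straightforward.

```python
import numpy as np

class HistogramBackgroundModel:
    """Hypothetical sketch: one intensity histogram per pixel.

    A pixel is labeled foreground when its current intensity falls in a
    bin that holds only a small share of the accumulated observations.
    """

    def __init__(self, height, width, bins=16, threshold=0.05):
        self.bins = bins
        self.threshold = threshold  # assumed min bin weight for background
        self.hist = np.zeros((height, width, bins), dtype=np.uint32)
        self.total = 0  # frames observed so far

    def _bin_index(self, gray_frame):
        # Quantize 8-bit intensities into histogram bins using integer
        # arithmetic only (shift instead of divide).
        return (gray_frame.astype(np.uint32) * self.bins) >> 8

    def update(self, gray_frame):
        # Accumulate the observed intensity bin for every pixel.
        idx = self._bin_index(gray_frame)
        rows, cols = np.indices(gray_frame.shape)
        self.hist[rows, cols, idx] += 1
        self.total += 1

    def foreground_mask(self, gray_frame):
        # Foreground = current bin is rarely seen at this pixel.
        idx = self._bin_index(gray_frame)
        rows, cols = np.indices(gray_frame.shape)
        weight = self.hist[rows, cols, idx] / max(self.total, 1)
        return weight < self.threshold
```

In use, each incoming grayscale frame first updates the model and is then classified against it; per-frame cost is a handful of integer operations per pixel, which is where the efficiency advantage over mixture-based models comes from.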

    Original language: English
    State: Published - 1 Jan 2013
    Event: 2013 7th International Conference on Distributed Smart Cameras, ICDSC 2013 - Palm Springs, CA, United States
    Duration: 29 Oct 2013 – 1 Nov 2013

    Conference

    Conference: 2013 7th International Conference on Distributed Smart Cameras, ICDSC 2013
    Country/Territory: United States
    City: Palm Springs, CA
    Period: 29/10/13 – 1/11/13
