Mixed Stage Partial Network and Background Data Augmentation for Surveillance Object Detection

Ping Yang Chen, Jun Wei Hsieh*, Munkhjargal Gochoo, Yong Sheng Chen

*Corresponding author of this work

Research output: Article › peer-review

6 Citations (Scopus)

Abstract

State-of-the-art (SoTA) object detection models have improved accuracy by a large margin via CNNs (Convolutional Neural Networks); however, these models still perform poorly on small road objects. Moreover, SoTA models are mainly trained on public benchmark datasets such as MS COCO, whose complicated backgrounds make the trained detectors robust. For surveillance or road videos, however, the monotone backgrounds cause these SoTA detectors to over-fit to the background. In applications such as autonomous driving or traffic flow estimation, this background-over-fitting problem introduces various challenges and degrades object detection accuracy. One novelty of this paper is an MBA (Mixed Background Augmentation) method that improves detection accuracy without additional labeling effort or any pre-training process. During inference, only one input image is needed for vehicle detection, without involving background subtraction. Another novelty of this paper is the design of an efficient MSP (Mixed Stage Partial) network to detect objects more accurately and efficiently in surveillance videos. Extensive experiments on the KITTI and UA-DETRAC benchmarks show that the proposed method achieves SoTA results for highly accurate and efficient vehicle detection. Detection accuracy improves from 78.53% to 83.59% at 25.7 fps on the UA-DETRAC dataset. The implementation code is available at https://github.com/pingyang1117/MSPNet.
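The abstract does not describe how backgrounds are mixed; for the paper's details, see the repository linked above. Purely as an illustration of the general idea behind background augmentation for surveillance footage, the sketch below estimates a static background with a per-pixel temporal median and linearly blends it into a training frame. All function names and the blending scheme here are hypothetical, not the authors' MBA implementation:

```python
import numpy as np

def estimate_background(frames):
    """Estimate a static background as the per-pixel temporal median
    of a list of frames (H x W x 3 uint8 arrays)."""
    return np.median(np.stack(frames), axis=0)

def mix_background(frame, background, alpha=0.7):
    """Blend one frame with an estimated background to diversify
    the monotone surveillance backgrounds seen during training."""
    mixed = alpha * np.asarray(frame, dtype=np.float64) + (1.0 - alpha) * background
    return np.clip(mixed, 0, 255).astype(np.uint8)
```

A pipeline along these lines would augment only the training set; as the abstract notes, inference still takes a single image with no background subtraction step.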

Original language: English
Pages (from-to): 23533-23547
Number of pages: 15
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 23
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2022
