TY - GEN
T1 - Efficient Vehicle Counting Based on Time-Spatial Images by Neural Networks
AU - Tseng, Yu Yun
AU - Hsu, Tzu Chien
AU - Wu, Yu Fu
AU - Chen, Jen Jee
AU - Tseng, Yu Chee
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - A highly efficient vehicle counting approach based on time-spatial images with deep learning is proposed in this paper. Most vehicle counting solutions rely on frame-by-frame object detection and tracking to count the cars that cross a counting line. However, these approaches incur a great deal of redundancy because they track vehicles over a large area even though only the moment a vehicle crosses the counting line matters. In this work, we use time-spatial images to focus only on the information along the counting lines, instead of whole images, to reduce redundancy. Due to the nature of time-spatial images, vehicle counting can be achieved by object detection in such images without frame-by-frame tracking. We propose a Foreground Favorable Model to overcome occlusion, congestion, and lighting-change problems, and Cross-Image Object Linking to overcome the distortion problem of nearly static vehicles. We also present an automatic time-spatial image dataset generation flow and the first time-spatial image dataset for vehicle counting tasks, called DRIVE-TSI. Our approach outperforms state-of-the-art solutions in counting accuracy and is shown to be much more efficient because it focuses on only a small number of pixels. Our model achieves 97.95% counting accuracy at 2.91 ms per frame in daytime urban scenarios.
AB - A highly efficient vehicle counting approach based on time-spatial images with deep learning is proposed in this paper. Most vehicle counting solutions rely on frame-by-frame object detection and tracking to count the cars that cross a counting line. However, these approaches incur a great deal of redundancy because they track vehicles over a large area even though only the moment a vehicle crosses the counting line matters. In this work, we use time-spatial images to focus only on the information along the counting lines, instead of whole images, to reduce redundancy. Due to the nature of time-spatial images, vehicle counting can be achieved by object detection in such images without frame-by-frame tracking. We propose a Foreground Favorable Model to overcome occlusion, congestion, and lighting-change problems, and Cross-Image Object Linking to overcome the distortion problem of nearly static vehicles. We also present an automatic time-spatial image dataset generation flow and the first time-spatial image dataset for vehicle counting tasks, called DRIVE-TSI. Our approach outperforms state-of-the-art solutions in counting accuracy and is shown to be much more efficient because it focuses on only a small number of pixels. Our model achieves 97.95% counting accuracy at 2.91 ms per frame in daytime urban scenarios.
KW - Intelligent transportation system
KW - Neural networks
KW - Time-spatial image
KW - Vehicle counting
UR - http://www.scopus.com/inward/record.url?scp=85122986088&partnerID=8YFLogxK
U2 - 10.1109/MASS52906.2021.00055
DO - 10.1109/MASS52906.2021.00055
M3 - Conference contribution
AN - SCOPUS:85122986088
T3 - Proceedings - 2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems, MASS 2021
SP - 383
EP - 391
BT - Proceedings - 2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems, MASS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Conference on Mobile Ad Hoc and Smart Systems, MASS 2021
Y2 - 4 October 2021 through 7 October 2021
ER -