TY - JOUR
T1 - Spatiotemporal Dilated Convolution with Uncertain Matching for Video-Based Crowd Estimation
AU - Ma, Yu-Jen
AU - Shuai, Hong-Han
AU - Cheng, Wen-Huang
N1 - Publisher Copyright:
IEEE
PY - 2021/1/8
Y1 - 2021/1/8
N2 - In this paper, we propose a novel SpatioTemporal convolutional Dense Network (STDNet) to address the video-based crowd counting problem. STDNet combines the decomposition of 3D convolution with 3D spatiotemporal dilated dense convolution to alleviate the rapid growth in model size caused by Conv3D layers. Moreover, since dilated convolution extracts multiscale features, we combine it with a channel attention block to enhance the feature representations. Because labeling crowds is difficult, especially in videos, imprecise or standard-inconsistent labels may lead to poor convergence of the model. To address this issue, we further propose a new patch-wise regression loss (PRL) to improve on the original pixel-wise loss. Experimental results on three video-based benchmarks, i.e., the UCSD, Mall, and WorldExpo'10 datasets, show that STDNet outperforms both image- and video-based state-of-the-art methods. The source code is released at https://github.com/STDNet/STDNet.
AB - In this paper, we propose a novel SpatioTemporal convolutional Dense Network (STDNet) to address the video-based crowd counting problem. STDNet combines the decomposition of 3D convolution with 3D spatiotemporal dilated dense convolution to alleviate the rapid growth in model size caused by Conv3D layers. Moreover, since dilated convolution extracts multiscale features, we combine it with a channel attention block to enhance the feature representations. Because labeling crowds is difficult, especially in videos, imprecise or standard-inconsistent labels may lead to poor convergence of the model. To address this issue, we further propose a new patch-wise regression loss (PRL) to improve on the original pixel-wise loss. Experimental results on three video-based benchmarks, i.e., the UCSD, Mall, and WorldExpo'10 datasets, show that STDNet outperforms both image- and video-based state-of-the-art methods. The source code is released at https://github.com/STDNet/STDNet.
KW - Crowd counting
KW - density map regression
KW - dilated convolution
KW - patch-wise regression loss
KW - spatiotemporal modeling
UR - http://www.scopus.com/inward/record.url?scp=85099551586&partnerID=8YFLogxK
U2 - 10.1109/TMM.2021.3050059
DO - 10.1109/TMM.2021.3050059
M3 - Article
AN - SCOPUS:85099551586
SN - 1520-9210
VL - 24
SP - 261
EP - 273
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -