TY - GEN
T1 - Summary of the 2023 Low-Power Deep Learning Object Detection and Semantic Segmentation Multitask Model Compression Competition for Traffic Scene in Asian Countries
AU - Ni, Yu Shu
AU - Tsai, Chia Chi
AU - Chen, Chih Cheng
AU - Kuo, Hsien Kai
AU - Chen, Po Yu
AU - Hu, Po Chi
AU - Kuo, Ted T.
AU - Hwang, Jenq Neng
AU - Guo, Jiun In
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The competition endeavors to accomplish object detection and semantic segmentation of traffic objects in Asia, notably in countries such as Taiwan, using low power while achieving a high mean intersection over union (mIOU) and a high mean average precision (mAP). This task is challenging because it must be accomplished in harsh driving environments. The targeted semantic segmentation objects include several types of lane markings, such as dashed and single white and yellow lines and double-dashed and double white and yellow lines, as well as main and alternative lanes, while the targeted object detection objects include cars, pedestrians, motorbikes, and bikes. To train their models, the participants utilized a total of 35,500 annotated images obtained from the iVS-ODSEG-Dataset, a revised version of the Berkeley Deep Drive 100K [1], along with 89,002 annotated training images from the iVS-Dataset [2]. Additionally, 130 annotated images of Asian road conditions were provided as validation examples. The contest evaluation process uses 9,612 testing images, of which 4,900 are used in the qualification stage and the remainder in the final stage. The competition had 223 registered teams, and the top 15 teams with the highest mIOU and mAP entered the final stage. Based on the score evaluation and the invited paper review results, the team 'Polybahn' produced the overall best model and the best INT8 model development, followed by team 'You Only Lowpower Once' and team 'ACVLab.'
AB - The competition endeavors to accomplish object detection and semantic segmentation of traffic objects in Asia, notably in countries such as Taiwan, using low power while achieving a high mean intersection over union (mIOU) and a high mean average precision (mAP). This task is challenging because it must be accomplished in harsh driving environments. The targeted semantic segmentation objects include several types of lane markings, such as dashed and single white and yellow lines and double-dashed and double white and yellow lines, as well as main and alternative lanes, while the targeted object detection objects include cars, pedestrians, motorbikes, and bikes. To train their models, the participants utilized a total of 35,500 annotated images obtained from the iVS-ODSEG-Dataset, a revised version of the Berkeley Deep Drive 100K [1], along with 89,002 annotated training images from the iVS-Dataset [2]. Additionally, 130 annotated images of Asian road conditions were provided as validation examples. The contest evaluation process uses 9,612 testing images, of which 4,900 are used in the qualification stage and the remainder in the final stage. The competition had 223 registered teams, and the top 15 teams with the highest mIOU and mAP entered the final stage. Based on the score evaluation and the invited paper review results, the team 'Polybahn' produced the overall best model and the best INT8 model development, followed by team 'You Only Lowpower Once' and team 'ACVLab.'
KW - Semantic segmentation
KW - autonomous driving
KW - embedded deep learning
KW - object detection
UR - http://www.scopus.com/inward/record.url?scp=85172319447&partnerID=8YFLogxK
U2 - 10.1109/ICMEW59549.2023.00012
DO - 10.1109/ICMEW59549.2023.00012
M3 - Conference contribution
AN - SCOPUS:85172319447
T3 - Proceedings - 2023 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2023
SP - 34
EP - 39
BT - Proceedings - 2023 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2023
Y2 - 10 July 2023 through 14 July 2023
ER -