Q-YOLOP: Quantization-Aware You Only Look Once for Panoptic Driving Perception

Chi Chih Chang, Wei Cheng Lin, Pei Shuo Wang, Sheng Feng Yu, Yu Chen Lu, Kuan Cheng Lin, Kai Chiang Wu

Research output: Conference contribution › peer-reviewed

3 Citations (Scopus)

Abstract

In this work, we present an efficient and quantization-aware panoptic driving perception model (Q-YOLOP) for object detection, drivable area segmentation, and lane line segmentation, in the context of autonomous driving. Our model employs the Efficient Layer Aggregation Network (ELAN) as its backbone and task-specific heads for each task. We employ a four-stage training process that includes pretraining on the BDD100K dataset, fine-tuning on both the BDD100K and iVS datasets, and quantization-aware training (QAT) on BDD100K. During the training process, we use powerful data augmentation techniques, such as random perspective and mosaic, and train the model on a combination of the BDD100K and iVS datasets. Both strategies enhance the model's generalization capabilities. The proposed model achieves state-of-the-art performance with an mAP@0.5 of 0.622 for object detection and an mIoU of 0.612 for segmentation, while maintaining low computational and memory requirements.
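The core mechanism of the quantization-aware training (QAT) stage mentioned in the abstract is "fake quantization": during the forward pass, weights and activations are rounded to a low-precision integer grid and immediately dequantized, so the network learns to tolerate quantization error before deployment. The sketch below illustrates a generic int8 affine quantize-dequantize step; the scale/zero-point scheme is a standard textbook formulation, not the paper's exact recipe.

```python
def fake_quantize(x, num_bits=8):
    """Quantize a list of floats to a num_bits integer grid, then dequantize.

    This simulates, in pure Python, the rounding error that QAT injects
    into the forward pass so the model adapts to it during training.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(x), max(x)
    if hi == lo:  # degenerate range: nothing to quantize
        return list(x)
    scale = (hi - lo) / (qmax - qmin)          # real-valued step per integer level
    zero_point = round(qmin - lo / scale)      # integer that represents 0.0
    q = [min(qmax, max(qmin, round(v / scale) + zero_point)) for v in x]
    return [(qi - zero_point) * scale for qi in q]

# Example: int8-simulated weights stay within one quantization step of the originals.
weights = [-0.51, 0.02, 0.37, 1.24]
print(fake_quantize(weights))
```

In a real QAT pipeline (e.g. PyTorch's `torch.ao.quantization`) this operation is inserted as a differentiable module with a straight-through gradient estimator; the sketch only shows the numerical effect of one forward pass.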

Original language: English
Host publication title: Proceedings - 2023 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 52-56
Number of pages: 5
ISBN (electronic): 9798350313154
DOIs
Publication status: Published - 2023
Event: 2023 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2023 - Brisbane, Australia
Duration: 10 Jul 2023 - 14 Jul 2023

Publication series

Name: Proceedings - 2023 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2023

