Radar and Camera Fusion for Object Forecasting in Driving Scenarios

Albert Budi Christian, Yu Hsuan Wu, Chih Yu Lin*, Lan Da Van, Yu Chee Tseng

*Corresponding author of this work

Research output: Conference contribution, peer-reviewed

1 Citation (Scopus)

Abstract

In this paper, we propose a sensor fusion architecture that combines data collected by cameras and radars and utilizes radar velocity for road users' trajectory prediction in real-world driving scenarios. The architecture is multi-stage, following the detect-track-predict paradigm. In the detection stage, camera images and radar point clouds are used to detect objects in the vehicle's surroundings by adopting two object detection models. The detected objects are tracked by an online tracking method. We also design a radar association method to extract the radar velocity of each object. In the prediction stage, we build a recurrent neural network that processes an object's temporal sequence of positions and velocities and predicts its future trajectory. Experiments on the real-world nuScenes autonomous driving dataset show that the radar velocity mainly affects the center of the bounding box representing an object's position and thus improves prediction performance.
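To illustrate the prediction stage described above, the following is a minimal sketch of a recurrent model that consumes a tracked object's past (x, y) positions and radar (vx, vy) velocities and regresses its future positions. The use of a PyTorch GRU, the layer sizes, and the prediction horizon are assumptions made here for illustration, not the authors' exact architecture.

# Minimal, hypothetical sketch of the prediction stage: a GRU over
# (position, radar velocity) sequences followed by a linear head that
# regresses a fixed number of future positions. All hyperparameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class TrajectoryPredictor(nn.Module):
    def __init__(self, hidden_dim: int = 64, horizon: int = 6):
        super().__init__()
        self.horizon = horizon
        # Each observed step carries (x, y) position and (vx, vy) radar velocity.
        self.encoder = nn.GRU(input_size=4, hidden_size=hidden_dim, batch_first=True)
        # Decode the final hidden state into `horizon` future (x, y) positions.
        self.head = nn.Linear(hidden_dim, horizon * 2)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, T_obs, 4) -- past positions and velocities of one track.
        _, h_last = self.encoder(obs)            # h_last: (1, batch, hidden_dim)
        future = self.head(h_last.squeeze(0))    # (batch, horizon * 2)
        return future.view(-1, self.horizon, 2)  # (batch, horizon, 2)


if __name__ == "__main__":
    model = TrajectoryPredictor()
    past = torch.randn(8, 10, 4)   # 8 tracks, 10 observed steps each
    pred = model(past)
    print(pred.shape)              # torch.Size([8, 6, 2])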

Original language: English
Title of host publication: Proceedings - 2022 IEEE 15th International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 105-111
Number of pages: 7
ISBN (Electronic): 9781665464994
DOIs
Publication status: Published - 2022
Event: 15th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2022 - Penang, Malaysia
Duration: 19 Dec 2022 - 22 Dec 2022

Publication series

Name: Proceedings - 2022 IEEE 15th International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2022

Conference

Conference: 15th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2022
Country/Territory: Malaysia
City: Penang
Period: 19/12/22 - 22/12/22
