D2D: Divide to Detect, A Scale-Aware Framework for On-Road Object Detection Using IR Camera

Van Tin Luu, Vu Hoang Tran, Egor Poliakov, Ching Chun Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

In this paper, to address the inconsistency among predictions in today's state-of-the-art (SOTA) object detection networks, which adopt a pyramid architecture with multi-level prediction, we propose a scale-aware framework for IR image-based on-road object detection. The proposed framework uses a scale-based attention mechanism to assign responsibilities to the individual feature levels. With this design, each feature level focuses on detecting a certain range of object scales, thereby minimizing conflicts among the predictions in the final result. Compared to the Scaled-YOLOv4 baseline, our method achieves better performance on the FLIR dataset while maintaining the same FPS. Experimental results on RGB image-based object detection datasets further show that our method also yields good improvements when applied to RGB images.
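
The abstract gives only a high-level description of the scale-based responsibility idea. The Python sketch below illustrates one possible way to realize it and is not the authors' implementation: the pyramid-level scale ranges (LEVEL_RANGES), the Gaussian soft margin, and the helper names are assumptions made purely for this example.

    import math

    # Hypothetical scale ranges (object size in pixels) for a 3-level pyramid;
    # the actual ranges used by D2D are not given in this abstract.
    LEVEL_RANGES = [(0, 64), (64, 128), (128, float("inf"))]

    def box_scale(box):
        """Object scale measured as sqrt(area) of an (x1, y1, x2, y2) box."""
        w = max(box[2] - box[0], 0.0)
        h = max(box[3] - box[1], 0.0)
        return math.sqrt(w * h)

    def level_weights(box, soft_margin=16.0):
        """Soft responsibility of each pyramid level for one ground-truth box.

        The level whose scale range contains the box gets weight 1; other
        levels decay smoothly with their distance to the box scale, so each
        level concentrates on its own scale band.
        """
        s = box_scale(box)
        weights = []
        for lo, hi in LEVEL_RANGES:
            if lo <= s < hi:
                weights.append(1.0)
            else:
                # distance of the box scale to the level's range
                d = (lo - s) if s < lo else (s - hi)
                weights.append(math.exp(-(d / soft_margin) ** 2))
        return weights

    if __name__ == "__main__":
        # A 40x40 box falls in the first band; distant levels get tiny weights.
        print(level_weights((10, 10, 50, 50)))

Weights of this kind could be used to scale each pyramid level's training loss so that it concentrates on its own scale band, which is the sort of prediction-conflict reduction across levels that the abstract describes.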

Original language: English
Title of host publication: 2023 IEEE International Conference on Consumer Electronics, ICCE 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665491303
DOIs
State: Published - 2023
Event: 2023 IEEE International Conference on Consumer Electronics, ICCE 2023 - Las Vegas, United States
Duration: 6 Jan 2023 – 8 Jan 2023

Publication series

Name: Digest of Technical Papers - IEEE International Conference on Consumer Electronics
Volume: 2023-January
ISSN (Print): 0747-668X

Conference

Conference: 2023 IEEE International Conference on Consumer Electronics, ICCE 2023
Country/Territory: United States
City: Las Vegas
Period: 6/01/23 – 8/01/23

Keywords

  • Deep learning
  • IR Image
  • Object Detection
