TY - GEN
T1 - Efficient Constraint-Aware Neural Architecture Search for Object Detection
AU - Poliakov, Egor
AU - Hung, Wei Jie
AU - Huang, Ching Chun
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - We propose an efficient Neural Architecture Search (NAS) method, named Zero-DNAS, for object detection tasks, capable of discovering a suitable architecture under given memory and FLOPs constraints. NAS aims to explore the search space automatically to discover the best-performing network architectures for a given task. However, NAS is resource-intensive and usually requires hundreds of GPU hours to discover a well-performing neural network architecture. For larger and more complex computer vision tasks such as object detection, the computing time and memory required by NAS increase dramatically. In practice, conventional sampling-based NAS methods do not guarantee the best possible solutions, whereas differentiable methods require substantial memory resources, making them challenging to apply in macro search space settings. In contrast, we propose a differentiable NAS paradigm with zero-cost proxy metrics that aims to determine the architecture within the given memory and FLOPs constraints. Experiments on object detection datasets show that our proposed algorithm can discover more accurate and faster architectures in a heavy macro search space in fewer than 2 NVIDIA RTX 2080 Ti GPU hours.
AB - We propose an efficient Neural Architecture Search (NAS) method, named Zero-DNAS, for object detection tasks, capable of discovering a suitable architecture under given memory and FLOPs constraints. NAS aims to explore the search space automatically to discover the best-performing network architectures for a given task. However, NAS is resource-intensive and usually requires hundreds of GPU hours to discover a well-performing neural network architecture. For larger and more complex computer vision tasks such as object detection, the computing time and memory required by NAS increase dramatically. In practice, conventional sampling-based NAS methods do not guarantee the best possible solutions, whereas differentiable methods require substantial memory resources, making them challenging to apply in macro search space settings. In contrast, we propose a differentiable NAS paradigm with zero-cost proxy metrics that aims to determine the architecture within the given memory and FLOPs constraints. Experiments on object detection datasets show that our proposed algorithm can discover more accurate and faster architectures in a heavy macro search space in fewer than 2 NVIDIA RTX 2080 Ti GPU hours.
UR - http://www.scopus.com/inward/record.url?scp=85180007230&partnerID=8YFLogxK
U2 - 10.1109/APSIPAASC58517.2023.10317340
DO - 10.1109/APSIPAASC58517.2023.10317340
M3 - Conference contribution
AN - SCOPUS:85180007230
T3 - 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2023
SP - 733
EP - 740
BT - 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2023
Y2 - 31 October 2023 through 3 November 2023
ER -