TY - GEN
T1 - Neural architecture search using progressive evolution
AU - Sinha, Nilotpal
AU - Chen, Kuan-Wen
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/7/8
Y1 - 2022/7/8
N2 - Vanilla neural architecture search (NAS) using evolutionary algorithms (EA) involves evaluating each architecture by training it from scratch, which is extremely time-consuming. This cost can be reduced by using a supernet to estimate the fitness of every architecture in the search space, owing to its weight-sharing nature. However, the estimated fitness is very noisy because of the co-adaptation of the operations in the supernet. In this work, we propose a method called pEvoNAS wherein the whole neural architecture search space is progressively reduced to smaller search-space regions containing good architectures. This is achieved by using a trained supernet for architecture evaluation during the search, with a genetic algorithm identifying search-space regions with good architectures. Upon reaching the final reduced search space, the supernet is used to search for the best architecture in that space using evolution. The search is further enhanced by weight inheritance, wherein the supernet for the smaller search space inherits its weights from the previously trained supernet for the larger search space. Experimentally, pEvoNAS gives better results on CIFAR-10 and CIFAR-100 while using significantly fewer computational resources than previous EA-based methods. The code for our paper can be found here.
AB - Vanilla neural architecture search (NAS) using evolutionary algorithms (EA) involves evaluating each architecture by training it from scratch, which is extremely time-consuming. This cost can be reduced by using a supernet to estimate the fitness of every architecture in the search space, owing to its weight-sharing nature. However, the estimated fitness is very noisy because of the co-adaptation of the operations in the supernet. In this work, we propose a method called pEvoNAS wherein the whole neural architecture search space is progressively reduced to smaller search-space regions containing good architectures. This is achieved by using a trained supernet for architecture evaluation during the search, with a genetic algorithm identifying search-space regions with good architectures. Upon reaching the final reduced search space, the supernet is used to search for the best architecture in that space using evolution. The search is further enhanced by weight inheritance, wherein the supernet for the smaller search space inherits its weights from the previously trained supernet for the larger search space. Experimentally, pEvoNAS gives better results on CIFAR-10 and CIFAR-100 while using significantly fewer computational resources than previous EA-based methods. The code for our paper can be found here.
KW - Neural architecture search
KW - genetic algorithm
KW - supernet
UR - http://www.scopus.com/inward/record.url?scp=85135218093&partnerID=8YFLogxK
U2 - 10.1145/3512290.3528707
DO - 10.1145/3512290.3528707
M3 - Conference contribution
AN - SCOPUS:85135218093
T3 - GECCO 2022 - Proceedings of the 2022 Genetic and Evolutionary Computation Conference
SP - 1093
EP - 1101
BT - GECCO 2022 - Proceedings of the 2022 Genetic and Evolutionary Computation Conference
PB - Association for Computing Machinery, Inc
T2 - 2022 Genetic and Evolutionary Computation Conference, GECCO 2022
Y2 - 9 July 2022 through 13 July 2022
ER -