TY - JOUR
T1 - Neuroevolution-based efficient field effect transistor compact device models
AU - Ho, Ya Wen
AU - Rawat, Tejender Singh
AU - Yang, Zheng Kai
AU - Pratik, Sparsh
AU - Lai, Guan Wen
AU - Tu, Yen Liang
AU - Lin, Albert
N1 - Publisher Copyright: Author
PY - 2021
Y1 - 2021
AB - Artificial neural networks (ANNs) and multilayer perceptrons (MLPs) have proven efficient for building highly accurate semiconductor device compact models (CMs). Their ability to update their weights and biases through backpropagation makes them well suited to this learning task. To improve learning, an MLP usually requires a large network and thus a large number of model parameters, which significantly increases circuit-simulation time. Optimizing the network architecture and topology is therefore a tedious yet important task. In this work, we tune the network topology using neuroevolution (NE) to develop semiconductor device CMs. With the input and output layers fixed, we let a genetic algorithm (GA), a gradient-free algorithm, tune the network architecture, in combination with Adam, a gradient-based backpropagation algorithm, which optimizes the network weights and biases. For comparison, we also implemented baseline MLP models with a similar number of parameters. In most cases, the NE models exhibit a lower root-mean-square error (RMSE) and require fewer training epochs than the MLP baselines. For instance, at a patience of 10 and across five model-parameter budgets, the test-set RMSEs in units of log(ampere) are 0.1461, 0.0985, 0.1274, 0.0971, and 0.0705 for NE, versus 0.2254, 0.1423, 0.1429, 0.1425, and 0.1391 for MLP, for a 28 nm foundry technology node.
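The abstract outlines a hybrid scheme: a genetic algorithm searches over hidden-layer topologies while Adam trains each candidate's weights under patience-based early stopping, with validation RMSE as the fitness. Below is a minimal PyTorch sketch of that idea, assuming a two-input (bias voltages) to one-output (log-scale drain current) mapping; the function names, mutation rules, and hyperparameters are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch (not the paper's code): GA evolves hidden-layer
    # widths; Adam trains each candidate; patience-based early stopping.
    import random
    import torch
    import torch.nn as nn

    def build_mlp(hidden_sizes, n_in=2, n_out=1):
        # Assumed I/O for a FET compact model: (Vgs, Vds) -> log10(Id).
        layers, prev = [], n_in
        for h in hidden_sizes:
            layers += [nn.Linear(prev, h), nn.Tanh()]
            prev = h
        layers.append(nn.Linear(prev, n_out))
        return nn.Sequential(*layers)

    def train_with_patience(model, xt, yt, xv, yv, patience=10, max_epochs=500):
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        mse = nn.MSELoss()
        best, wait = float("inf"), 0
        for _ in range(max_epochs):
            opt.zero_grad()
            loss = mse(model(xt), yt)
            loss.backward()
            opt.step()
            with torch.no_grad():
                val = mse(model(xv), yv).item()
            if val < best:
                best, wait = val, 0
            else:
                wait += 1
                if wait >= patience:   # early stopping
                    break
        return best ** 0.5             # validation RMSE = fitness

    def mutate(widths):
        # Illustrative mutation: drop, insert, or resize a hidden layer.
        g = list(widths)
        r = random.random()
        if r < 0.3 and len(g) > 1:
            g.pop(random.randrange(len(g)))
        elif r < 0.6:
            g.insert(random.randrange(len(g) + 1), random.choice([4, 8, 16, 32]))
        else:
            g[random.randrange(len(g))] = random.choice([4, 8, 16, 32])
        return g

    def neuroevolve(xt, yt, xv, yv, pop_size=8, generations=5):
        pop = [[random.choice([4, 8, 16, 32])
                for _ in range(random.randint(1, 3))] for _ in range(pop_size)]
        best = None
        for _ in range(generations):
            scored = sorted((train_with_patience(build_mlp(g), xt, yt, xv, yv), g)
                            for g in pop)
            best = scored[0]
            elite = [g for _, g in scored[:pop_size // 2]]   # selection
            pop = elite + [mutate(random.choice(elite))       # mutation
                           for _ in range(pop_size - len(elite))]
        return best  # (validation RMSE, hidden-layer widths)

    if __name__ == "__main__":
        x = torch.rand(256, 2)                 # stand-in for (Vgs, Vds) biases
        y = (x[:, :1] + 0.5 * x[:, 1:]) ** 2   # stand-in for a log10(Id) surface
        rmse, widths = neuroevolve(x[:192], y[:192], x[192:], y[192:])
        print(rmse, widths)

Each generation keeps the lower-RMSE topologies and mutates them, so network size is tuned only as far as accuracy demands, which is the parameter-count advantage the abstract reports over fixed MLP baselines.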
KW - Fitting
KW - Genetic algorithms
KW - Integrated circuit modeling
KW - Machine learning
KW - Mathematical models
KW - Metal oxide semiconductor (MOS)
KW - Neuroevolution
KW - Physics
KW - Semiconductor device compact model
KW - Semiconductor devices
KW - Topology
UR - http://www.scopus.com/inward/record.url?scp=85120088240&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2021.3130254
DO - 10.1109/ACCESS.2021.3130254
M3 - Article
AN - SCOPUS:85120088240
SN - 2169-3536
JO - IEEE Access
JF - IEEE Access
ER -