TY - GEN
T1 - A Unipolar-Based Stochastic LIF Neuron Design for Low-Cost Spiking Neural Network
AU - Chen, Kun Chih
AU - Kuo, Tze Ling
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/8/23
Y1 - 2021/8/23
N2 - Deep Neural Networks (DNNs) have already shown their superiority in many real-world applications. Nevertheless, because of their highly dense neuron computation, high power consumption is the main design challenge in implementing DNN hardware. To address the power problem of DNNs, the Spiking Neural Network (SNN) has been proposed, which replaces the conventional numerical operations of DNNs with spike transmission to reduce power consumption. However, it is difficult to implement large-scale SNNs because of their intrinsically non-differentiable neuron operations. In this paper, we apply a unipolar-based Stochastic Computing (SC) method to build an SNN neuron model, because SC encoding is similar to the rate coding used in SNNs. The SC-based SNN not only improves computational efficiency but also lowers the SNN design barrier. To further improve computing accuracy, we apply a pruning-based spike-blocking method to the proposed SC-based SNN. Compared with non-SC SNN designs, the proposed SC-based SNN reduces system power consumption by about 81.37% to 90.58% and area cost by around 72.38% to 75.64%. In addition, compared with the current SC-based SNN design, the proposed design improves computing accuracy by 10% while requiring 64.77% less area and 61.26% less power.
AB - Deep Neural Networks (DNNs) have already shown their superiority in many real-world applications. Nevertheless, because of their highly dense neuron computation, high power consumption is the main design challenge in implementing DNN hardware. To address the power problem of DNNs, the Spiking Neural Network (SNN) has been proposed, which replaces the conventional numerical operations of DNNs with spike transmission to reduce power consumption. However, it is difficult to implement large-scale SNNs because of their intrinsically non-differentiable neuron operations. In this paper, we apply a unipolar-based Stochastic Computing (SC) method to build an SNN neuron model, because SC encoding is similar to the rate coding used in SNNs. The SC-based SNN not only improves computational efficiency but also lowers the SNN design barrier. To further improve computing accuracy, we apply a pruning-based spike-blocking method to the proposed SC-based SNN. Compared with non-SC SNN designs, the proposed SC-based SNN reduces system power consumption by about 81.37% to 90.58% and area cost by around 72.38% to 75.64%. In addition, compared with the current SC-based SNN design, the proposed design improves computing accuracy by 10% while requiring 64.77% less area and 61.26% less power.
KW - Leaky integrate-and-fire
KW - SNN
KW - Spiking neural network
KW - Stochastic computing
UR - http://www.scopus.com/inward/record.url?scp=85115416725&partnerID=8YFLogxK
U2 - 10.1109/COINS51742.2021.9524272
DO - 10.1109/COINS51742.2021.9524272
M3 - Conference contribution
AN - SCOPUS:85115416725
T3 - 2021 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2021
BT - 2021 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2021
Y2 - 23 August 2021 through 25 August 2021
ER -