Fully parallel write/read in resistive synaptic array for accelerating on-chip learning

Ligang Gao, I. Ting Wang, Pai Yu Chen, Sarma Vrudhula, Jae Sun Seo, Yu Cao, Tuo-Hung Hou, Shimeng Yu

    Research output: Article · peer-reviewed

    85 citations (Scopus)


    A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging; it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states can be continuously tuned by identical programming pulses. To demonstrate the advantages of the parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. When realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture achieves ~95% recognition accuracy on MNIST handwritten digits, which is close to the accuracy achieved in software by the ideal sparse coding algorithm.

    Pages (from-to): 1-9
    Publication status: Published - 13 November 2015


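The abstract's two key crossbar operations can be illustrated with a simple idealized model: the weighted sum is an analog matrix-vector product (column currents summing row voltage contributions through the conductances), and the fully parallel write updates all cells in a constant number of pulse cycles, whereas a row-by-row write needs one cycle per row. The sketch below is a minimal numerical illustration, not the paper's actual scheme; the array size, conductance range, and fixed pulse step are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 32, 32
# Conductance matrix in siemens; the range is an illustrative assumption.
G = rng.uniform(1e-6, 2e-4, size=(rows, cols))

# Weighted sum: voltages applied to the rows, column currents read in parallel.
# Each column current is the analog dot product of V with that column's conductances.
V = rng.uniform(0.0, 0.5, size=rows)
I = G.T @ V

# Idealized weight update: each identical programming pulse shifts conductance
# by a fixed step (the abstract reports >200 continuously tunable levels).
delta_pulses = rng.integers(-3, 4, size=(rows, cols))  # desired update, in pulse counts
step = 1e-6                                            # hypothetical step size per pulse
G_new = G + step * delta_pulses

# Row-by-row write: one row per cycle, so write time scales with array size.
cycles_row_by_row = rows
# Fully parallel write: all cells pulsed simultaneously, constant write time.
cycles_parallel = 1

speed_up = cycles_row_by_row / cycles_parallel
print(speed_up)  # projected speed-up grows with the number of rows
```

In this toy model the projected speed-up equals the row count, which is consistent with the abstract's claim of >30× speed-up in a large-scale (>30-row) array.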