An 1-bit by 1-bit High Parallelism In-RRAM Macro with Co-Training Mechanism for DCNN Applications

Chi Liu, Shao Tzu Li, Tong Lin Pan, Cheng En Ni, Yun Sung, Chia Lin Hu, Kang Yu Chang, Tuo Hung Hou, Tian Sheuan Chang, Shyh Jye Jou*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A methodology for Artificial Intelligence (AI) edge Deep Convolutional Neural Network (DCNN) hardware design that increases computation parallelism and decreases latency is needed for real-time applications. To increase computation parallelism, a 1-bit by 1-bit high-parallelism in-RRAM computing (IRC) macro is proposed. The goal of this test macro is to characterize the RRAM and to propose a co-training mechanism between the DCNN algorithm and the RRAM module that addresses the non-linearity of IRC.
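The co-training idea described above can be illustrated with a minimal sketch: the forward pass runs 1-bit by 1-bit accumulations through a simulated non-linear RRAM readout, while training updates full-precision shadow weights via a straight-through estimator, so the learned weights compensate for the distortion. The readout model, names, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """1-bit quantization to {0, 1} (forward pass of a 1b x 1b macro)."""
    return (x >= 0).astype(np.float64)

def rram_readout(dot, alpha=0.02):
    """Non-linear bit-line readout: ideal popcount distorted by a
    saturating quadratic term (an assumed RRAM non-linearity model)."""
    return dot - alpha * dot ** 2

def forward(x_bits, w_real, alpha=0.02):
    w_bits = binarize(w_real)
    ideal = x_bits @ w_bits            # ideal 1-bit x 1-bit accumulation
    return rram_readout(ideal, alpha), w_bits

# Tiny co-training loop: regress targets produced by the *non-linear*
# macro so the binarized weights compensate for the readout distortion.
n, d, m, alpha, lr = 64, 16, 4, 0.02, 0.01
x = (rng.random((n, d)) > 0.5).astype(np.float64)   # 1-bit activations
w_true = rng.standard_normal((d, m))
y_target, _ = forward(x, w_true, alpha)

w = rng.standard_normal((d, m)) * 0.1               # shadow weights
init_loss = float(np.mean((forward(x, w, alpha)[0] - y_target) ** 2))
for _ in range(200):
    y, w_bits = forward(x, w, alpha)
    err = y - y_target
    # Straight-through estimator: the readout derivative (1 - 2*alpha*dot)
    # is applied, while binarization is treated as the identity so the
    # gradient flows to the real-valued shadow weights.
    grad = x.T @ (err * (1 - 2 * alpha * (x @ w_bits))) / n
    w -= lr * grad

final_loss = float(np.mean((forward(x, w, alpha)[0] - y_target) ** 2))
print(f"MSE: {init_loss:.4f} -> {final_loss:.4f}")
```

In a hardware-aware flow, the assumed `rram_readout` would be replaced by measured macro characteristics, which is where the measured test-macro behavior would feed back into training.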

Original language: English
Title of host publication: 2022 International Symposium on VLSI Design, Automation and Test, VLSI-DAT 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665409216
DOIs
State: Published - 2022
Event: 2022 International Symposium on VLSI Design, Automation and Test, VLSI-DAT 2022 - Hsinchu, Taiwan
Duration: 18 Apr 2022 – 21 Apr 2022

Publication series

Name: 2022 International Symposium on VLSI Design, Automation and Test, VLSI-DAT 2022 - Proceedings

Conference

Conference: 2022 International Symposium on VLSI Design, Automation and Test, VLSI-DAT 2022
Country/Territory: Taiwan
City: Hsinchu
Period: 18/04/22 – 21/04/22

Keywords

  • CIFAR-10
  • Co-Training
  • Computing In-Memory
  • In-RRAM Computing

