Residue systolic implementations for neural networks

C. N. Zhang*, M. Wang, Chien-Chao Tseng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



In this work we propose two techniques for improving VLSI implementations of artificial neural networks (ANNs). By using two kinds of processing elements (PEs), one dedicated to the basic operations (addition and multiplication) and the other to evaluating the activation function, the total time and cost of the VLSI array implementation of ANNs can be halved compared with previous work. By taking advantage of the residue number system (RNS), the efficiency of each PE can be increased further. Two RNS-based array processor designs are proposed. The first is built from look-up tables; the second is constructed from binary adders together with mixed-radix conversion (MRC), so that the hardware is simple and high-speed operation is obtained. The proposed techniques are general enough to be extended to other forms of loading and learning algorithms.
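To illustrate the two ideas named in the abstract — residue-number-system arithmetic for the PEs and mixed-radix conversion for decoding — the following is a minimal sketch in Python. The moduli set (3, 5, 7) and all function names are illustrative assumptions, not taken from the paper; the paper's designs realize these operations in look-up tables or binary adders, not software.

```python
# Sketch of RNS encoding, channel-wise multiplication, and
# mixed-radix conversion (MRC) decoding.
# NOTE: the moduli set below is an illustrative assumption,
# not one specified by the paper.

MODULI = (3, 5, 7)  # pairwise coprime; dynamic range = 3*5*7 = 105

def to_rns(x, moduli=MODULI):
    """Encode an integer as its residues modulo each modulus."""
    return tuple(x % m for m in moduli)

def rns_mul(a, b, moduli=MODULI):
    """Multiply digit-wise: each residue channel is independent,
    which is what keeps the per-channel PE hardware small and fast."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(residues, moduli=MODULI):
    """Decode via mixed-radix conversion (MRC): recover the
    mixed-radix digits sequentially using only modular arithmetic."""
    digits = []
    rs = list(residues)
    for i, mi in enumerate(moduli):
        d = rs[i]
        digits.append(d)
        # subtract the digit and divide by mi in the remaining channels
        for j in range(i + 1, len(moduli)):
            mj = moduli[j]
            inv = pow(mi, -1, mj)  # modular inverse of mi mod mj
            rs[j] = ((rs[j] - d) * inv) % mj
    # reconstruct: x = d0 + d1*m0 + d2*m0*m1 + ...
    x, weight = 0, 1
    for d, m in zip(digits, moduli):
        x += d * weight
        weight *= m
    return x

# Example: 9 * 11 = 99, within the dynamic range 105
product = rns_mul(to_rns(9), to_rns(11))
assert from_rns(product) == 99
```

Because the residue channels never interact during addition or multiplication, each PE can operate on a small modulus in parallel; MRC is only needed when converting results back to a conventional representation.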

Original language: English
Pages (from-to): 149-156
Number of pages: 8
Journal: Neural Computing & Applications
Issue number: 3
State: Published - Sep 1995


  • Mixed-radix conversion
  • Neural network
  • Parallel processing
  • Residue number system
  • Systolic array


