Abstract
An adaptive electronic neural network processor has been developed for high-speed image compression based upon a frequency-sensitive self-organization algorithm. The performance of this self-organizing network is compared with that of a conventional vector quantization algorithm; the proposed method is quite efficient and can achieve near-optimal results. The neural network processor includes a pipelined codebook generator and a parallel vector quantizer, which achieves a time complexity of O(1) for each input vector. A mixed-signal design technique is used, with analog circuitry performing the neural computation and digital circuitry processing the multiple-bit address information. The prototype neural network processor chip for a 25-dimensional adaptive vector quantizer of 64 code words was designed, fabricated, and tested. It includes 25 input neurons, 25 × 64 synapse cells, 64 distortion-computing neurons, a winner-take-all circuit block, and a digital index encoder. It occupies a silicon area of 4.6 × 6.8 mm² in a 2.0-µm scalable CMOS technology and provides a computing capability as high as 3.2 billion connections per second. Experimental results for this neural-based vector quantizer chip and for the winner-take-all circuit test structure are also presented.
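The chip implements in hardware what can be expressed in software as frequency-sensitive competitive learning for codebook generation followed by nearest-code-word search for quantization. The sketch below is a minimal software analogue of that scheme, not the authors' circuit-level design; the function names, the win-count distortion scaling, and all parameter values (learning rate, epochs) are illustrative assumptions.

```python
import numpy as np

def fsc_codebook(training_vectors, num_codewords=64, epochs=10,
                 learning_rate=0.05, seed=0):
    """Sketch of frequency-sensitive competitive learning for
    vector-quantizer codebook generation (illustrative parameters).

    Each code word keeps a win count; distortions are scaled by that
    count so rarely winning code words become more competitive,
    spreading the code words over the input distribution.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(training_vectors, dtype=float)
    # Initialize code words from randomly chosen training vectors.
    codebook = data[rng.choice(len(data), num_codewords, replace=False)].copy()
    counts = np.ones(num_codewords)

    for _ in range(epochs):
        for x in data:
            # Frequency-sensitive distortion: squared distance times win count.
            dist = np.sum((codebook - x) ** 2, axis=1) * counts
            winner = int(np.argmin(dist))          # winner-take-all selection
            codebook[winner] += learning_rate * (x - codebook[winner])
            counts[winner] += 1
    return codebook

def quantize(x, codebook):
    """Return the index of the nearest code word (nearest-neighbor search)."""
    return int(np.argmin(np.sum((codebook - np.asarray(x)) ** 2, axis=1)))

if __name__ == "__main__":
    # Toy usage: 25-dimensional blocks (e.g., 5x5 image patches), 64 code words.
    blocks = np.random.rand(2000, 25)
    cb = fsc_codebook(blocks, num_codewords=64)
    print("index of first block:", quantize(blocks[0], cb))
```

In the reported chip, the distortion computation and winner-take-all selection in the inner loop are performed in parallel analog circuitry, which is what yields the O(1) quantization time per input vector rather than the O(N) search of this serial sketch.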
| Original language | English |
| --- | --- |
| Pages (from-to) | 506-518 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 3 |
| Issue number | 3 |
| DOIs | |
| State | Published - 1 Jan 1992 |