Training time of grid Gaussian networks increases at power order of input dimension

M. H. Lin*, Fu-Chuang Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We study the problem of training Gaussian grid radial basis function networks to approximate a nonlinear mapping over an n-dimensional cube. It is shown that the training process converges under a gradient learning rule, and a practical method for selecting the learning rate in the gradient rule is proposed. A formal analysis then demonstrates that, under the gradient rule, the training time (in terms of iterations) needed to achieve a given accuracy increases at a power order of the input dimension, even when suitable parallel computing hardware is available. Computer simulations are given to verify this point.
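
As a rough illustration of the setting described in the abstract, the sketch below trains the output weights of a grid Gaussian RBF network by gradient descent over the unit cube. It is not the authors' implementation: the target function, grid resolution m, Gaussian width sigma, learning rate eta, and iteration count are all illustrative assumptions. The m**n center count makes the source of the dimension dependence visible.

# A minimal sketch (not the paper's code) of a grid Gaussian RBF network
# trained with a gradient rule. All hyperparameters are illustrative.
import itertools
import numpy as np

def grid_centers(n, m):
    """Place m**n Gaussian centers on a uniform grid over the cube [0, 1]^n."""
    axis = np.linspace(0.0, 1.0, m)
    return np.array(list(itertools.product(axis, repeat=n)))  # shape (m**n, n)

def gaussian_features(X, centers, sigma):
    """Evaluate every Gaussian basis function at every input point."""
    sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))  # shape (num_points, m**n)

def train(X, y, centers, sigma, eta, iters):
    """Gradient descent on the linear output weights of the RBF network."""
    Phi = gaussian_features(X, centers, sigma)
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        err = Phi @ w - y                 # residual of current approximation
        w -= eta * Phi.T @ err / len(y)   # gradient step on mean squared error
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 2, 5                           # input dimension, grid points per axis
    centers = grid_centers(n, m)          # m**n = 25 centers in this toy setting
    sigma = 1.0 / (m - 1)                 # width tied to the grid spacing
    X = rng.random((200, n))
    y = np.sin(2 * np.pi * X).prod(axis=1)  # an illustrative nonlinear target
    w = train(X, y, centers, sigma, eta=0.2, iters=3000)
    mse = np.mean((gaussian_features(X, centers, sigma) @ w - y) ** 2)
    print(f"{m**n} centers, training MSE = {mse:.4f}")

Because the center count grows as m**n, both the per-iteration cost and (per the paper's analysis) the number of iterations needed for a given accuracy grow with the input dimension n, which is the effect the simulations in the paper are said to verify.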

Original language: English
Pages (from-to): 113-124
Number of pages: 12
Journal: Journal of the Chinese Institute of Electrical Engineering, Transactions of the Chinese Institute of Engineers, Series E/Chung Kuo Tien Chi Kung Chieng Hsueh K'an
Volume: 5
Issue number: 2
State: Published - 1 May 1998
