We study the problem of training Gaussian grid radial basis function networks to approximate a nonlinear mapping over an n-dimensional cube. It is shown that the training process converges under a gradient learning rule. A practical method for selecting the learning rate in the gradient rule is proposed. Formal analysis then demonstrates that, under the gradient rule, the training time (in terms of iterations) needed to achieve a given accuracy increases at a power order of the dimension, even when suitable parallel computing hardware is available. Computer simulations are given to verify this point.
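The training scheme described above can be illustrated with a minimal sketch: output weights of a Gaussian RBF network, with centers fixed on a uniform grid over the cube, are updated by gradient descent on the squared approximation error. The target function, grid spacing, width `sigma`, and learning rate `eta` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 1-D illustration: train the output weights of a Gaussian
# grid RBF network to approximate f(x) = sin(pi * x) on the cube [0, 1].
rng = np.random.default_rng(0)

centers = np.linspace(0.0, 1.0, 11)  # Gaussian centers on a uniform grid
sigma = 0.1                          # common Gaussian width (assumed)

def phi(x):
    """Gaussian basis activations, shape (n_samples, n_centers)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

# Training samples drawn from the target mapping.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(np.pi * x_train)

w = np.zeros(len(centers))  # output weights, initialized at zero
eta = 0.05                  # learning rate (assumed, small enough to converge)

for _ in range(2000):
    Phi = phi(x_train)
    err = Phi @ w - y_train               # network output minus target
    w -= eta * Phi.T @ err / len(x_train) # gradient step on mean squared error

mse = np.mean((phi(x_train) @ w - y_train) ** 2)
print(f"final MSE: {mse:.4f}")
```

In higher dimensions the grid has a number of nodes that grows as a power of n, which is the source of the training-time growth the paper analyzes.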
|Pages (from - to)||113-124|
|Journal||Journal of the Chinese Institute of Electrical Engineering, Transactions of the Chinese Institute of Engineers, Series E/Chung KuoTien Chi Kung Chieng Hsueh K'an|
|Publication status||Published - 1 May 1998|