Training time of grid Gaussian networks increases at power order of input dimension

M. H. Lin*, Fu-Chuang Chen

*Corresponding author for this work

Research output: Article, peer-reviewed

Abstract

We study the problem of training Gaussian grid radial basis function networks to approximate a nonlinear mapping over an n-dimensional cube. It is shown that the training process converges under a gradient learning rule, and a practical method for selecting the learning rate in the gradient rule is proposed. A formal analysis then demonstrates that, under the gradient rule, the training time (in terms of iterations) needed to achieve a given accuracy increases at a power order of the input dimension, even when suitable parallel computing hardware is available. Computer simulations are given to verify this point.
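To make the setting concrete, the following is a minimal sketch (not the authors' code) of a grid Gaussian RBF network on the unit cube, with output weights trained by an instantaneous-gradient (LMS-style) rule on sampled inputs. All names and parameter values (grid_per_dim, sigma, lr, iters) are illustrative assumptions; note that the number of centers, and hence the per-iteration cost, is grid_per_dim**n, which is the exponential growth in the input dimension that the paper's analysis addresses.

```python
# Minimal sketch, assuming a regular grid of Gaussian centers on [0, 1]^n
# and a gradient (LMS) update on the output weights. Illustrative only.
import itertools
import numpy as np

def grid_rbf_train(f, n=2, grid_per_dim=5, sigma=0.25,
                   lr=0.1, iters=2000, seed=0):
    """Fit f: [0,1]^n -> R with Gaussians centered on a regular grid.

    The grid has grid_per_dim**n centers, so memory and per-iteration
    cost both grow as a power of grid_per_dim in the dimension n.
    """
    rng = np.random.default_rng(seed)
    axes = [np.linspace(0.0, 1.0, grid_per_dim)] * n
    centers = np.array(list(itertools.product(*axes)))   # shape (M^n, n)
    w = np.zeros(len(centers))                           # output weights

    for _ in range(iters):
        x = rng.random(n)                                # random point in the cube
        # Gaussian activations of all grid units at x
        phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))
        err = f(x) - w @ phi                             # instantaneous error
        w += lr * err * phi                              # gradient (LMS) step
    return centers, w

if __name__ == "__main__":
    # Example: approximate a smooth target mapping in 2-D.
    target = lambda x: np.sin(np.pi * x[0]) * np.cos(np.pi * x[1])
    centers, w = grid_rbf_train(target, n=2)
    x = np.array([0.3, 0.7])
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * 0.25 ** 2))
    print("prediction:", w @ phi, "target:", target(x))
```

In this sketch, raising n from 2 to, say, 6 with the same grid_per_dim multiplies the number of centers from 5^2 = 25 to 5^6 = 15625, which illustrates why iteration counts for a fixed accuracy can scale at a power order of the input dimension.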
