Abstract
An analytic investigation of the average-case learning and generalization properties of Radial Basis Function Networks (RBFs) is presented, utilizing on-line gradient descent as the learning rule. The analytic method employed allows both the calculation of the generalization error and the examination of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the role of the learning rate and the specialization of the hidden units, which gives insight into how the time required for training can be reduced. The realizable and over-realizable cases are studied in detail: the phase of learning in which the hidden units are unspecialized (the symmetric phase) and the phase in which asymptotic convergence occurs are analyzed, and their typical properties are found. Finally, simulations are performed that strongly confirm the analytic results.
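For readers who want the learning rule made concrete, the following is a minimal sketch, not the paper's formulation: it assumes unit-width Gaussian hidden units, a teacher-student setup for the realizable case, and illustrative choices for the sizes `N`, `K` and the learning rate `eta`.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 5, 3      # input dimension and number of hidden units (illustrative sizes)
eta = 0.05       # learning rate (illustrative value)

def rbf_forward(x, centres, weights):
    """Output of an RBF network with unit-width Gaussian hidden units."""
    phi = np.exp(-0.5 * np.sum((centres - x) ** 2, axis=1))  # hidden activations
    return phi @ weights, phi

def online_step(x, y, centres, weights, eta):
    """One on-line gradient-descent update of the error 0.5 * (y - f(x))**2."""
    f, phi = rbf_forward(x, centres, weights)
    delta = y - f
    # Centre gradient is computed with the pre-update weights.
    grad_c = delta * (weights * phi)[:, None] * (x - centres)
    weights = weights + eta * delta * phi   # adapt output weights
    centres = centres + eta * grad_c        # adapt hidden-unit centres
    return centres, weights

# Realizable case: the 'student' learns data produced by a 'teacher'
# of identical architecture, one random example at a time.
teacher_c, teacher_w = rng.normal(size=(K, N)), rng.normal(size=K)
centres, weights = rng.normal(size=(K, N)), rng.normal(size=K)
for _ in range(20000):
    x = rng.normal(size=N)
    y, _ = rbf_forward(x, teacher_c, teacher_w)
    centres, weights = online_step(x, y, centres, weights, eta)
```

In this setting, the symmetric phase described in the abstract corresponds to the long stretch of training during which the student's hidden units remain nearly interchangeable, before each specializes to a particular teacher unit.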
| Original language | English |
| --- | --- |
| Pages (from-to) | 1601-1622 |
| Number of pages | 22 |
| Journal | Neural Computation |
| Volume | 9 |
| Issue number | 7 |
| Publication status | Published - 1 Oct 1997 |
Bibliographical note
Copyright of the Massachusetts Institute of Technology Press (MIT Press)

Keywords
- radial basis function networks
- error
- network
- internal dynamics
- learning rate
- hidden units