Abstract
We consider the problem of on-line gradient descent learning for general two-layer neural networks. An analytic solution is presented and used to investigate the role of the learning rate in controlling the evolution and convergence of the learning process.
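The paper's analytic solution is not reproduced here, but the on-line learning scenario the abstract describes can be sketched numerically: a "student" two-layer network trained by stochastic gradient descent on a stream of examples generated by a fixed "teacher" network, with the learning rate controlling the update size. All specifics below (dimensions, the tanh activation, fixed second-layer weights, the 1/N learning-rate scaling) are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 10, 3          # input dimension, number of hidden units (assumed)
eta = 0.1             # learning rate, the quantity whose role the paper studies

# Teacher network generating the targets (hypothetical setup).
B = rng.standard_normal((K, N)) / np.sqrt(N)

# Student first-layer weights, trained on-line; second layer fixed.
W = rng.standard_normal((K, N)) / np.sqrt(N)
v = np.ones(K)

def forward(weights, x):
    h = np.tanh(weights @ x)   # hidden-unit activations
    return v @ h, h

errors = []
for t in range(20000):
    x = rng.standard_normal(N)          # fresh example each step (on-line)
    y_target, _ = forward(B, x)
    y, h = forward(W, x)
    delta = y - y_target
    # Gradient of the squared error w.r.t. the first-layer weights.
    grad = np.outer(delta * v * (1.0 - h**2), x)
    W -= (eta / N) * grad               # learning-rate-scaled update
    errors.append(0.5 * delta**2)

print(np.mean(errors[:1000]), np.mean(errors[-1000:]))
```

Tracking the generalization error as a function of `eta` in a setup like this is the kind of experiment the paper's analytic solution characterizes exactly in the large-N limit.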
Original language | English |
---|---|
Pages (from-to) | 302-308 |
Number of pages | 7 |
Journal | Advances in Neural Information Processing Systems |
Volume | 8 |
Publication status | Published - 1996 |
Bibliographical note
Copyright of Massachusetts Institute of Technology Press (MIT Press) http://mitpress.mit.edu/mitpress/copyright/default.asp

Keywords
- on-line
- gradient descent learning
- general two-layer neural networks
- learning rate
- learning process