Abstract
Neural networks are statistical models and learning rules are estimators. In this paper a theory for measuring generalisation is developed by combining Bayesian decision theory with information geometry. The performance of an estimator is measured by the information divergence between the true distribution and the estimate, averaged over the Bayesian posterior. This unifies the majority of error measures currently in use. The optimal estimators also reveal some intricate interrelationships among information geometry, Banach spaces and sufficient statistics.
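As a rough illustration of the performance measure described in the abstract (the notation below is ours, not taken from the paper): with $p_\theta$ the true distribution indexed by a parameter $\theta$, $\hat{p}$ the estimate produced by the learning rule, and $\pi(\theta \mid \mathcal{D})$ the Bayesian posterior given training data $\mathcal{D}$, the posterior-averaged information divergence can be sketched as

```latex
\mathbb{E}_{\pi}\!\left[ D\!\left(p_\theta \,\middle\|\, \hat{p}\right) \right]
  = \int D\!\left(p_\theta \,\middle\|\, \hat{p}\right) \, \pi(\theta \mid \mathcal{D}) \, d\theta ,
\qquad
D(p \,\|\, q) = \int p(x) \,\log \frac{p(x)}{q(x)} \, dx ,
```

where $D$ is written here, for concreteness, as the Kullback-Leibler divergence; the paper's "information divergence" may be more general. An estimator minimising this posterior average is optimal in the decision-theoretic sense the abstract describes.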
| Original language | English |
|---|---|
| Journal | Annals of Mathematics and Artificial Intelligence |
| Publication status | Published - 2 Jul 1995 |
| Event | Proc. Mathematics of Neural Networks and Applications - Duration: 2 Jul 1995 → 2 Jul 1995 |
Bibliographical note
Copyright of SpringerLink

Keywords
- neural networks
- Bayesian
- information geometry
- estimator
- error
- Banach