Mixture density network training by computation in parameter space

David J. Evans

Research output: Preprint or Working paper › Technical report


Training Mixture Density Network (MDN) configurations within the NETLAB framework is slow because of how the error function and its gradient are computed. By optimising these computations so that gradient information is computed in parameter space, training time is reduced by at least a factor of sixty in the example given. This reduction widens the range of problems to which MDNs can practically be applied, making the MDN framework an attractive tool for the applied problem solver.
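The abstract does not spell out the optimised computation, but the MDN error function it refers to is the standard negative log-likelihood of a Gaussian mixture conditioned on the network outputs (Bishop's formulation, as implemented in NETLAB's `mdn` functions). A minimal sketch of evaluating that error with array operations over the whole parameter set at once, rather than looping over data points and kernels, is shown below; the function name `mdn_nll` and the 1-D target case are illustrative assumptions, not the report's actual code.

```python
import numpy as np

def mdn_nll(pi, mu, sigma, t):
    """Negative log-likelihood (MDN error) for 1-D Gaussian mixture outputs.

    pi, mu, sigma : (N, M) arrays of mixing coefficients, kernel centres,
                    and kernel widths for N data points and M kernels.
    t             : (N,) array of target values.

    All N*M kernel densities are evaluated in one vectorised pass over the
    parameter arrays, instead of an explicit double loop.
    """
    d = t[:, None] - mu                                        # (N, M) residuals
    phi = np.exp(-0.5 * (d / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.sum(np.log(np.sum(pi * phi, axis=1)))           # scalar error
```

The same vectorisation applies to the gradient: differentiating the summed log-likelihood with respect to the mixture parameters yields expressions in the same (N, M) arrays, so both the error and its gradient can be formed without per-pattern loops.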
Original language: English
Place of Publication: Birmingham
Publisher: Aston University
Number of pages: 17
ISBN (Print): NCRG/98/016
Publication status: Published - 1998


Keywords:
  • Mixture Density Network training
  • error function
  • gradient information
  • parameter space
  • applied problem solver