Prediction Intervals for Neural Networks via Nonlinear Regression
Richard D. de Veaux, Jennifer Schumi, Jason Schweinsberg and Lyle H. Ungar
Vol. 40, No. 4 (Nov., 1998), pp. 273-282
Published by: Taylor & Francis, Ltd. on behalf of American Statistical Association and American Society for Quality
Stable URL: http://www.jstor.org/stable/1270528
Page Count: 10
Standard methods for computing prediction intervals in nonlinear regression can be effectively applied to neural networks when the number of training points is large. Simulations show, however, that these methods can generate unreliable prediction intervals on smaller datasets when the network is trained to convergence. Stopping the training algorithm prior to convergence, to avoid overfitting, reduces the effective number of parameters but can lead to prediction intervals that are too wide. We present an alternative approach to estimating prediction intervals using weight decay to fit the network and show via a simulation study that this method may be effective in overcoming some of the shortcomings of the other approaches.
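The standard nonlinear-regression interval the abstract refers to can be sketched as follows. This is an illustrative delta-method computation on a toy model y = a·exp(b·x), standing in for a trained network; it is not the paper's exact procedure. The model, parameter values, and function names are assumptions for illustration only (the weight-decay variant discussed in the paper would modify the (FᵀF)⁻¹ term):

```python
import numpy as np

# Toy nonlinear model y = a * exp(b * x), a stand-in for a neural network.
rng = np.random.default_rng(0)
a, b, sigma = 2.0, 0.5, 0.1
x = np.linspace(0.0, 1.0, 30)
y = a * np.exp(b * x) + rng.normal(0.0, sigma, x.size)

def model(theta, x):
    return theta[0] * np.exp(theta[1] * x)

def jacobian(theta, x):
    # Analytic Jacobian of the model with respect to (a, b).
    e = np.exp(theta[1] * x)
    return np.column_stack([e, theta[0] * x * e])

# Assume the fit has already been done; for illustration, use the true
# parameter values as a stand-in for the least-squares estimates.
theta_hat = np.array([a, b])
resid = y - model(theta_hat, x)
n, p = x.size, theta_hat.size
s2 = resid @ resid / (n - p)          # residual variance estimate

F = jacobian(theta_hat, x)            # n-by-p Jacobian at the fit
FtF_inv = np.linalg.inv(F.T @ F)

def prediction_interval(x_new, z=1.96):
    """Approximate 95% prediction interval at x_new (normal quantile
    used in place of the t quantile for simplicity)."""
    f = jacobian(theta_hat, np.atleast_1d(x_new))[0]
    half = z * np.sqrt(s2 * (1.0 + f @ FtF_inv @ f))
    y_hat = model(theta_hat, np.atleast_1d(x_new))[0]
    return y_hat - half, y_hat + half

lo, hi = prediction_interval(0.5)
print(lo, hi)
```

With few training points (small n − p) or a near-singular FᵀF, as happens when a network is overparameterized, this interval becomes unreliable, which is the failure mode the abstract describes and the motivation for the weight-decay alternative.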
Technometrics © 1998 American Statistical Association