
Prediction Intervals for Neural Networks via Nonlinear Regression

Richard D. de Veaux, Jennifer Schumi, Jason Schweinsberg and Lyle H. Ungar
Technometrics
Vol. 40, No. 4 (Nov., 1998), pp. 273-282
DOI: 10.2307/1270528
Stable URL: http://www.jstor.org/stable/1270528
Page Count: 10

Abstract

Standard methods for computing prediction intervals in nonlinear regression can be effectively applied to neural networks when the number of training points is large. Simulations show, however, that these methods can generate unreliable prediction intervals on smaller datasets when the network is trained to convergence. Stopping the training algorithm prior to convergence, to avoid overfitting, reduces the effective number of parameters but can lead to prediction intervals that are too wide. We present an alternative approach to estimating prediction intervals using weight decay to fit the network and show via a simulation study that this method may be effective in overcoming some of the shortcomings of the other approaches.
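The "standard methods" the abstract refers to are the usual nonlinear-regression prediction intervals based on a local linearization of the fitted function. As a minimal sketch (not the paper's weight-decay-adjusted method, and using hypothetical toy data rather than the paper's simulation design), one can fit a one-hidden-unit network by least squares and form a delta-method interval from the parameter Jacobian:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import t as t_dist

# Toy data: a noisy sigmoid (hypothetical example)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = np.tanh(1.5 * x) + rng.normal(scale=0.1, size=x.size)

# One-hidden-unit "network": f(x; b) = b0 + b1 * tanh(b2 + b3 * x)
def f(b, x):
    return b[0] + b[1] * np.tanh(b[2] + b[3] * x)

def resid(b):
    return y - f(b, x)

fit = least_squares(resid, x0=np.array([0.0, 1.0, 0.0, 1.0]))
b_hat = fit.x
n, p = x.size, b_hat.size
s2 = np.sum(resid(b_hat) ** 2) / (n - p)  # residual variance, df = n - p

# least_squares returns the Jacobian of the residuals, which is -J of f
J = -fit.jac
JtJ_inv = np.linalg.inv(J.T @ J)

# 95% prediction interval at a new point x0 via the delta method:
# f(x0) +/- t_{n-p} * s * sqrt(1 + j0' (J'J)^{-1} j0)
x0 = np.array([0.5])
eps = 1e-6
j0 = np.array([(f(b_hat + eps * np.eye(p)[k], x0) - f(b_hat, x0)) / eps
               for k in range(p)]).ravel()
half = t_dist.ppf(0.975, n - p) * np.sqrt(s2 * (1.0 + j0 @ JtJ_inv @ j0))
lo, hi = float(f(b_hat, x0) - half), float(f(b_hat, x0) + half)
```

With few parameters relative to the data this works as in ordinary nonlinear regression; the abstract's point is that for larger networks on small datasets, trained to convergence or stopped early, the effective degrees of freedom `n - p` and the Jacobian-based variance become unreliable, motivating the weight-decay alternative.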
