Improvements on Cross-Validation: The .632+ Bootstrap Method
Bradley Efron and Robert Tibshirani
Journal of the American Statistical Association
Vol. 92, No. 438 (Jun., 1997), pp. 548-560
Stable URL: http://www.jstor.org/stable/2965703
Page Count: 13
Topics: Error rates, Point estimators, Estimators, Estimation bias, Simulation training, Standard error, Bootstrap resampling, Statistical estimation, Simulations, Statistical variance
Abstract: A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? This is an important question both for comparing models and for assessing a final selected model. The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased but can be highly variable. Here we discuss bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. We show that a particular bootstrap method, the .632+ rule, substantially outperforms cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric and apply to any possible prediction rule; however, we study only classification problems with 0-1 loss in detail. Our simulations include "smooth" prediction rules like Fisher's linear discriminant function and unsmooth ones like nearest neighbors.
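To make the abstract's construction concrete, the following is a minimal sketch (not the authors' code) of a .632+-style estimate for 0-1 loss: a leave-one-out bootstrap error, which acts as a smoothed cross-validation, is combined with the apparent (resubstitution) error using a weight driven by the relative overfitting rate. The function names `err632plus` and `one_nn`, the 1-nearest-neighbor rule, and the toy two-class Gaussian data are all illustrative choices, not part of the paper.

```python
import numpy as np

def one_nn(X_train, y_train, X_test):
    """Illustrative 1-nearest-neighbor prediction rule (an 'unsmooth' rule)."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d2.argmin(axis=1)]

def err632plus(X, y, fit_predict, B=50, seed=0):
    """Sketch of a .632+ bootstrap prediction-error estimate for 0-1 loss."""
    rng = np.random.default_rng(seed)
    n = len(y)

    # Apparent (resubstitution) error: train and test on the same data.
    pred_all = fit_predict(X, y, X)
    err_bar = np.mean(pred_all != y)

    # Leave-one-out bootstrap error: each point is predicted only by rules
    # trained on bootstrap samples that happen to exclude it.
    loss_sum = np.zeros(n)
    n_eval = np.zeros(n)
    for _ in range(B):
        idx = rng.integers(0, n, size=n)           # bootstrap sample (with replacement)
        out = np.setdiff1d(np.arange(n), idx)      # points left out of this sample
        if out.size == 0:
            continue
        pred = fit_predict(X[idx], y[idx], X[out])
        loss_sum[out] += (pred != y[out])
        n_eval[out] += 1
    seen = n_eval > 0
    err1 = np.mean(loss_sum[seen] / n_eval[seen])

    # No-information error rate gamma: expected error if responses and
    # predictors were paired at random.
    classes = np.unique(y)
    p_obs = np.array([np.mean(y == c) for c in classes])
    q_hat = np.array([np.mean(pred_all == c) for c in classes])
    gamma = np.sum(p_obs * (1.0 - q_hat))

    # Relative overfitting rate R and the .632+ weight w.
    err1 = min(err1, gamma)
    if err1 > err_bar and gamma > err_bar:
        R = (err1 - err_bar) / (gamma - err_bar)
    else:
        R = 0.0
    w = 0.632 / (1.0 - 0.368 * R)
    return (1.0 - w) * err_bar + w * err1

# Toy example: two Gaussian classes in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(1.5, 1.0, (30, 2))])
y = np.repeat([0, 1], 30)
est = err632plus(X, y, one_nn)
```

Note how the weighting handles the 1-NN rule's pathology: its apparent error is exactly zero (every point is its own nearest neighbor), so the relative overfitting rate pushes the weight on the bootstrap error toward 1, avoiding the downward bias a fixed .632 weight would give.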
Journal of the American Statistical Association © 1997 American Statistical Association