The Relative Importance of Bias and Variability in the Estimation of the Variance of a Statistic
Jeffrey S. Simonoff
Journal of the Royal Statistical Society. Series D (The Statistician)
Vol. 42, No. 1 (1993), pp. 3-7
Stable URL: http://www.jstor.org/stable/2348105
Page Count: 5
The concept of mean squared error, while useful in the comparison of location-type estimators, can be misleading for variance estimators, since it does not address the relative importance of bias and variability, or the differing effects of negative and positive bias, on test size and confidence interval coverage. A simple model is presented here to quantify these effects. It is shown that bias (particularly negative bias) can be a severe problem in this regard, and that a less (negatively) biased, but more variable, variance estimator would be preferred.
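The abstract's central claim, that negative bias in a variance estimator harms confidence interval coverage, can be illustrated with a small Monte Carlo sketch. This is not the paper's model; it simply compares nominal-95% intervals for a normal mean built from the unbiased (n − 1 divisor) variance estimator against the negatively biased (n divisor) alternative:

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage(n=10, reps=20000, z=1.96, divisor="n-1"):
    """Empirical coverage of nominal-95% intervals for a N(0,1) mean.

    divisor="n-1" uses the unbiased variance estimator;
    divisor="n" uses the negatively biased one.
    """
    x = rng.standard_normal((reps, n))
    xbar = x.mean(axis=1)
    d = n - 1 if divisor == "n-1" else n
    s2 = ((x - xbar[:, None]) ** 2).sum(axis=1) / d
    half = z * np.sqrt(s2 / n)       # half-width of the interval
    return float(np.mean(np.abs(xbar - 0.0) <= half))

cov_unbiased = coverage(divisor="n-1")
cov_biased = coverage(divisor="n")
```

At small n, the negatively biased estimator shortens the intervals and produces noticeably lower coverage than the unbiased one, consistent with the abstract's point that negative bias is the more damaging direction.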
Journal of the Royal Statistical Society. Series D (The Statistician) © 1993 Royal Statistical Society