
The Relative Importance of Bias and Variability in the Estimation of the Variance of a Statistic

Jeffrey S. Simonoff
Journal of the Royal Statistical Society. Series D (The Statistician)
Vol. 42, No. 1 (1993), pp. 3-7
Published by: Wiley for the Royal Statistical Society
DOI: 10.2307/2348105
Stable URL: http://www.jstor.org/stable/2348105
Page Count: 5

Abstract

The concept of mean squared error, while useful in the comparison of location-type estimators, can be misleading for variance estimators, since it does not address the relative importance of bias and variability, or the differing effects of negative and positive bias, on test size and confidence interval coverage. A simple model is presented here to quantify these effects. It is shown that bias (particularly negative bias) can be a severe problem in this regard, and that a less (negatively) biased, but more variable, variance estimator would be preferred.
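The effect the abstract describes can be illustrated with a small simulation (this is an illustrative sketch, not the model from the paper). A negatively biased variance estimator produces systematically shorter confidence intervals, which in turn under-cover the true parameter. Below, nominal 95% z-intervals for a normal mean are built on the same simulated samples using two variance estimators: the divisor-n estimator (negatively biased) and the divisor-(n-1) estimator (unbiased). The biased estimator's coverage is never higher, despite its smaller mean squared error at small n.

```python
import math
import random

def coverage(n, reps, seed=0):
    """Empirical coverage of nominal 95% z-intervals for the mean of
    N(0, 1) samples, using two variance estimators on the same data:
    divisor n (negatively biased) and divisor n - 1 (unbiased)."""
    rng = random.Random(seed)
    z = 1.959963984540054  # 97.5% standard-normal quantile
    hits_n = hits_nm1 = 0
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(x) / n
        ss = sum((xi - m) ** 2 for xi in x)
        # Interval covers the true mean (0) iff |xbar| <= z * sqrt(s2 / n).
        if abs(m) <= z * math.sqrt(ss / n / n):        # divisor n
            hits_n += 1
        if abs(m) <= z * math.sqrt(ss / (n - 1) / n):  # divisor n - 1
            hits_nm1 += 1
    return hits_n / reps, hits_nm1 / reps

cov_biased, cov_unbiased = coverage(n=10, reps=20000)
print(f"divisor n:   {cov_biased:.3f}")
print(f"divisor n-1: {cov_unbiased:.3f}")
```

Because both intervals are computed from the same samples and the divisor-n interval is always the shorter of the two, its coverage is deterministically no higher; both fall short of the nominal 95% at n = 10 (the z-interval itself under-covers relative to the t-interval), with the biased estimator worse.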
