
# Assessing the Accuracy of the Maximum Likelihood Estimator: Observed Versus Expected Fisher Information

Bradley Efron and David V. Hinkley
Biometrika
Vol. 65, No. 3 (Dec., 1978), pp. 457-482
DOI: 10.2307/2335893
Stable URL: http://www.jstor.org/stable/2335893
Page Count: 26

## Abstract

This paper concerns normal approximations to the distribution of the maximum likelihood estimator in one-parameter families. The traditional variance approximation is $1/\mathscr{J}_{\hat\theta}$, where $\hat\theta$ is the maximum likelihood estimator and $\mathscr{J}_\theta$ is the expected total Fisher information. Many writers, including R. A. Fisher, have argued in favour of the variance estimate $1/I(x)$, where $I(x)$ is the observed information, i.e. minus the second derivative of the log likelihood function at $\hat\theta$ given data $x$. We give a frequentist justification for preferring $1/I(x)$ to $1/\mathscr{J}_{\hat\theta}$. The former is shown to approximate the conditional variance of $\hat\theta$ given an appropriate ancillary statistic which to a first approximation is $I(x)$. The theory may be seen to flow naturally from Fisher's pioneering papers on likelihood estimation. A large number of examples are used to supplement a small amount of theory. Our evidence indicates preference for the likelihood ratio method of obtaining confidence limits.
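The contrast between the two variance estimates can be sketched numerically. The snippet below (an illustration, not code from the paper) uses the Cauchy location family, a standard example in this setting: the expected per-observation Fisher information is $1/2$, so the expected total information is $n/2$, while the observed information $I(x) = -\ell''(\hat\theta)$ varies from sample to sample. The optimizer bounds and step sizes are arbitrary choices for the sketch.

```python
import math
import random

def loglik(theta, xs):
    """Cauchy location log likelihood, up to an additive constant."""
    return sum(-math.log(1.0 + (x - theta) ** 2) for x in xs)

def mle(xs, lo=-10.0, hi=10.0, iters=200):
    """Golden-section search for the maximizer.

    Assumes the dominant mode lies in [lo, hi]; the Cauchy likelihood
    can be multimodal, so this is adequate only for a sketch.
    """
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if loglik(c, xs) > loglik(d, xs):
            b = d
        else:
            a = c
    return (a + b) / 2.0

def observed_info(theta_hat, xs, h=1e-4):
    """Minus the second derivative of the log likelihood at theta_hat,
    estimated by central differences."""
    return -(loglik(theta_hat + h, xs)
             - 2.0 * loglik(theta_hat, xs)
             + loglik(theta_hat - h, xs)) / h ** 2

random.seed(1)
n = 20
# Standard Cauchy draws via the inverse-CDF transform.
xs = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

theta_hat = mle(xs)
I_x = observed_info(theta_hat, xs)   # observed total information
J = n / 2.0                          # expected total information for Cauchy

print(f"theta_hat = {theta_hat:.3f}")
print(f"1/I(x)    = {1.0 / I_x:.4f}  (observed-information variance estimate)")
print(f"1/J       = {1.0 / J:.4f}  (expected-information variance estimate)")
```

Across repeated samples, $1/I(x)$ fluctuates around $1/\mathscr{J}$; the paper's argument is that this fluctuation tracks the conditional variance of $\hat\theta$ given the ancillary, rather than being mere noise.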
