On the History of Maximum Likelihood in Relation to Inverse Probability and Least Squares
Vol. 14, No. 2 (May, 1999), pp. 214-222
Published by: Institute of Mathematical Statistics
Stable URL: http://www.jstor.org/stable/2676741
Page Count: 9
It is shown that the method of maximum likelihood occurs in rudimentary forms before Fisher [Messenger of Mathematics 41 (1912) 155-160], but not under this name. Some of the estimates called "most probable" would today have been called "most likely." Gauss [Z. Astronom. Verwandte Wiss. 1 (1816) 185-196] used invariance under parameter transformation when deriving his estimate of the standard deviation in the normal case. Hagen [Grundzüge der Wahrscheinlichkeits-Rechnung, Dümmler, Berlin (1837)] used the maximum likelihood argument for deriving the frequentist version of the method of least squares for the linear normal model. Edgeworth [J. Roy. Statist. Soc. 72 (1909) 81-90] proved the asymptotic normality and optimality of the maximum likelihood estimate for a restricted class of distributions. Fisher had two aversions: noninvariance and unbiasedness. By replacing the posterior mode with the maximum likelihood estimate he achieved invariance, and by using a two-stage method of maximum likelihood he avoided appealing to unbiasedness for the linear normal model.
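The invariance property central to the abstract can be illustrated with a minimal sketch (not from the paper; the sample values are made up for the example): for a normal sample with known mean, the maximum likelihood estimate of the standard deviation is simply the square root of the maximum likelihood estimate of the variance, with no separate maximization needed.

```python
import math

def mle_variance(xs, mu):
    """MLE of sigma^2 for N(mu, sigma^2) with mu known:
    the mean of the squared deviations from mu."""
    return sum((x - mu) ** 2 for x in xs) / len(xs)

# Hypothetical sample, purely for illustration.
xs = [4.8, 5.1, 5.3, 4.9, 5.4]
mu = 5.0

sigma2_hat = mle_variance(xs, mu)
# Invariance under parameter transformation: the MLE of sigma is
# obtained by transforming the MLE of sigma^2 directly.
sigma_hat = math.sqrt(sigma2_hat)
```

By contrast, the posterior mode that Fisher rejected lacks this property: under a flat prior, the mode of the posterior of the variance does not in general map to the mode of the posterior of the standard deviation, whereas the maximum likelihood estimate transforms cleanly.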
Statistical Science © 1999 Institute of Mathematical Statistics