A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

Herman Chernoff
The Annals of Mathematical Statistics
Vol. 23, No. 4 (Dec., 1952), pp. 493-507
Stable URL: http://www.jstor.org/stable/2236576
Page Count: 15

Abstract

In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum_{j=1}^{n} X_j \le k$, where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests, $e = \log \rho_1 / \log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \le na)$ behaves roughly like $m^n$, where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
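
The exponential rate described in the abstract can be checked numerically. The sketch below is not from the paper; it assumes, purely for illustration, that $X$ is Bernoulli with $p = 0.5$ and uses a hypothetical threshold $a = 0.3$ and sample size $n = 100$. It minimizes the moment generating function of $X - a$ over $t$ to obtain $m$, then compares the resulting quantity $m^n$ with the exact binomial tail probability $P(S_n \le na)$.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import binom

    # Illustrative (assumed) parameters: X ~ Bernoulli(p), threshold a < p, sample size n.
    p, a, n = 0.5, 0.3, 100

    def mgf_shifted(t):
        # Moment generating function of X - a: E[exp(t*(X - a))] = exp(-t*a) * (1 - p + p*exp(t))
        return np.exp(-t * a) * (1 - p + p * np.exp(t))

    # m is the minimum value assumed by that moment generating function.
    res = minimize_scalar(mgf_shifted, bounds=(-50.0, 50.0), method="bounded")
    m = res.fun

    # Exact lower-tail probability P(S_n <= n*a) for the binomial sum.
    exact = binom.cdf(int(np.floor(n * a)), n, p)

    print(f"m (per-observation rate)    = {m:.4f}")
    print(f"m**n                        = {m**n:.3e}")
    print(f"Exact P(S_n <= n*a)         = {exact:.3e}")
    print(f"exact**(1/n) for comparison = {exact**(1.0/n):.4f}")

Under these assumed parameters, the per-observation rate $m$ and the exact tail probability raised to the power $1/n$ come out close to each other, which is the sense in which $P(S_n \le na)$ "behaves roughly like $m^n$" for large $n$.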
