
# A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations

Herman Chernoff
The Annals of Mathematical Statistics
Vol. 23, No. 4 (Dec., 1952), pp. 493-507
Stable URL: http://www.jstor.org/stable/2236576
Page Count: 15

## Abstract

In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form: Reject $H_0$ if $S_n = \sum_{j=1}^n X_j \leq k$, where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular, the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests, $e = \log \rho_1 / \log \rho_2$ measures the relative efficiency of these tests in the following sense: for large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leq na)$ behaves roughly like $m^n$, where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
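The fact the abstract invokes, that $P(S_n \leq na)$ behaves roughly like $m^n$ with $m = \min_t E\,e^{t(X-a)}$, can be checked numerically. The sketch below (not from the paper; function names and the Bernoulli choice of $X$ are mine) minimizes the moment generating function of $X - a$ over a grid of $t$ values and compares $m^n$ to the exact binomial tail probability:

```python
import math

def chernoff_m(p, a):
    """m = min over t of E[e^{t(X - a)}] for X ~ Bernoulli(p).
    Here E[e^{t(X - a)}] = e^{-t a} * (1 - p + p * e^t).
    For the lower tail (a < p) the minimizing t is negative, so a
    grid over [-10, 0] suffices for this illustration."""
    ts = (-10 + i * 0.001 for i in range(10001))
    return min(math.exp(-t * a) * (1 - p + p * math.exp(t)) for t in ts)

def exact_lower_tail(n, p, a):
    """Exact P(S_n <= n a) for S_n ~ Binomial(n, p), via math.comb."""
    k = math.floor(n * a)
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k + 1))

# Example: n = 100 fair-coin observations, threshold a = 0.3.
n, p, a = 100, 0.5, 0.3
m = chernoff_m(p, a)
bound = m ** n                       # the m^n approximation (an upper bound)
exact = exact_lower_tail(n, p, a)    # the true tail probability
```

Here `bound` dominates `exact`, and both decay at a comparable exponential rate in $n$, which is what makes the index $\rho$ a meaningful large-sample measure.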
