Contributions to the Theory of Sequential Analysis. I

M. A. Girshick
The Annals of Mathematical Statistics
Vol. 17, No. 2 (Jun., 1946), pp. 123-143
Stable URL: http://www.jstor.org/stable/2236034
Page Count: 21

Abstract

Given two populations $\pi_1$ and $\pi_2$, each characterized by a distribution density $f(x, \theta)$ assumed to be known except for the value of the parameter $\theta$, it is desired to test the composite hypothesis $\theta_1 < \theta_2$ against the alternative hypothesis $\theta_1 > \theta_2$, where $\theta_i$ is the value of the parameter in the distribution density of $\pi_i$ $(i = 1, 2)$. The criterion proposed for testing this hypothesis is based on the sequential probability ratio and consists of the following: choose two positive constants $a$ and $b$ and two values of $\theta$, say $\theta^0_1$ and $\theta^0_2$. Take pairs of observations $x_{1\alpha}$ from $\pi_1$ and $x_{2\alpha}$ from $\pi_2$, $(\alpha = 1, 2, \ldots)$, in sequence and compute $Z_j = \sum_{\alpha=1}^{j} z_\alpha$, where $z_\alpha = \log \big\lbrack \frac{f(x_{2\alpha}, \theta^0_1)\, f(x_{1\alpha}, \theta^0_2)}{f(x_{2\alpha}, \theta^0_2)\, f(x_{1\alpha}, \theta^0_1)} \big\rbrack$. The hypothesis tested is accepted or rejected depending on whether $Z_n \geq a$ or $Z_n \leq -b$, where $n$ is the smallest integer $j$ for which either of these relationships is satisfied. The boundaries $a$ and $b$ are given partly in terms of the desired risks of making an erroneous decision. The values $\theta^0_1$ and $\theta^0_2$ define the magnitude of the difference between the values of $\theta$ in $\pi_1$ and in $\pi_2$ which is considered worth detecting. It is shown that the power of this test is constant on a curve $h(\theta_1, \theta_2) = \text{constant}$. If $E\big(\log \frac{f(x, \theta^0_2)}{f(x, \theta^0_1)}\big)$ is a monotonic function of $\theta$, then the test is unbiased in the sense that all points $(\theta_1, \theta_2)$ lying on the curve $h(\theta_1, \theta_2) = \text{constant}$ satisfy either $\theta_1 < \theta_2$ throughout or $\theta_1 > \theta_2$ throughout. For a large class of known distributions the quantity $h$ is shown to be an appropriate measure of the difference between $\theta_1$ and $\theta_2$, and for this class of distributions the test procedure is simple and intuitively sensible. For the case of the binomial, the exact power of this test as well as the distribution of $n$ is given.
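
The stopping rule sketched in the abstract can be illustrated with a short program. The following Python example is not taken from the paper: it is a minimal sketch assuming Bernoulli (single-trial binomial) densities, in the spirit of the binomial case mentioned at the end of the abstract, and the parameter values, boundary constants, and function names (z_increment, sequential_test, bernoulli_density) are illustrative.

```python
import math
import random

def z_increment(x1, x2, theta1_0, theta2_0, f):
    """Log probability-ratio increment z_alpha for one pair of observations
    (x1 from pi_1, x2 from pi_2), following the formula in the abstract."""
    return math.log(
        (f(x2, theta1_0) * f(x1, theta2_0)) /
        (f(x2, theta2_0) * f(x1, theta1_0))
    )

def sequential_test(draw_pair, theta1_0, theta2_0, f, a, b, max_steps=10_000):
    """Accumulate Z_j = sum of z_alpha over successive pairs and stop at the
    first j with Z_j >= a ("accept") or Z_j <= -b ("reject"), per the
    decision rule stated in the abstract. Returns the decision and n."""
    Z = 0.0
    for n in range(1, max_steps + 1):
        x1, x2 = draw_pair()
        Z += z_increment(x1, x2, theta1_0, theta2_0, f)
        if Z >= a:
            return "accept", n
        if Z <= -b:
            return "reject", n
    return "undecided", max_steps   # truncation guard, not part of the paper's rule

# Bernoulli density f(x, theta) = theta^x * (1 - theta)^(1 - x), x in {0, 1}.
def bernoulli_density(x, theta):
    return theta if x == 1 else 1.0 - theta

if __name__ == "__main__":
    random.seed(0)
    true_theta1, true_theta2 = 0.3, 0.6          # illustrative true parameters
    draw = lambda: (int(random.random() < true_theta1),
                    int(random.random() < true_theta2))
    decision, n = sequential_test(draw, theta1_0=0.4, theta2_0=0.6,
                                  f=bernoulli_density, a=3.0, b=3.0,)
    print(decision, "after", n, "pairs")
```

The choice of the boundaries a and b here is arbitrary; in the paper they are determined (in part) by the desired error probabilities of the two kinds.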
