Journal Article
Transformations Related to the Angular and the Square Root
Murray F. Freeman and John W. Tukey
The Annals of Mathematical Statistics
Vol. 21, No. 4 (Dec., 1950), pp. 607–611
Published by: Institute of Mathematical Statistics
Stable URL: http://www.jstor.org/stable/2236611
Page Count: 5
Topics: Approximation, Binomials, Statistical variance, Range errors, Analysis of variance
Abstract
The use of transformations to stabilize the variance of binomial or Poisson data is familiar (Anscombe [1], Bartlett [2, 3], Curtiss [4], Eisenhart [5]). The comparison of transformed binomial or Poisson data with percentage points of the normal distribution to make approximate significance tests or to set approximate confidence intervals is less familiar. Mosteller and Tukey [6] have recently made a graphical application of a transformation related to the square-root transformation for such purposes, where the use of "binomial probability paper" avoids all computation. We report here on an empirical study of a number of approximations, some intended for significance and confidence work and others for variance stabilization. For significance testing and the setting of confidence limits, we should like to use the normal deviate K exceeded with the same probability as the number of successes x from n in a binomial distribution with expectation np, which is defined by $\frac{1}{\sqrt{2\pi}} \int_K^{\infty} e^{-\frac{1}{2}t^2}\, dt = \operatorname{Prob}\{x \geq k \mid \operatorname{binomial}, n, p\}.$ The most useful approximations to K that we can propose here are N (very simple), N+ (accurate near the usual percentage points), and N** (quite accurate generally), where $N = 2\big(\sqrt{(k + 1)q} - \sqrt{(n - k)p}\big).$ (This is the approximation used with binomial probability paper.) $N^+ = N + \frac{N + 2p - 1}{12\sqrt{E}}, \quad E = \text{lesser of } np \text{ and } nq,$ $N^\ast = N + \frac{(N - 2)(N + 2)}{12} \Big(\frac{1}{\sqrt{np + 1}} - \frac{1}{\sqrt{nq + 1}}\Big),$ $N^{\ast\ast} = N^\ast + \frac{N^\ast + 2p - 1}{12\sqrt{E}}, \quad E = \text{lesser of } np \text{ and } nq.$ For variance stabilization, the averaged angular transformation $\sin^{-1}\sqrt{\frac{x}{n + 1}} + \sin^{-1}\sqrt{\frac{x + 1}{n + 1}}$ has variance within ±6% of $\frac{1}{n + \frac{1}{2}}$ (angles in radians), $\frac{821}{n + \frac{1}{2}}$ (angles in degrees), for almost all cases where np ≥ 1.
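The approximations N, N+, N*, and N** are simple enough to compute directly from the formulas in the abstract. A minimal sketch in Python (the function name and argument order are ours, not from the paper):

```python
import math

def approx_normal_deviate(k, n, p):
    """Approximations from the abstract to the normal deviate K matched to
    Prob{x >= k | binomial, n, p}. Returns (N, N+, N*, N**)."""
    q = 1.0 - p
    E = min(n * p, n * q)  # E = lesser of np and nq
    # N: the simple approximation used with binomial probability paper.
    N = 2.0 * (math.sqrt((k + 1) * q) - math.sqrt((n - k) * p))
    # N+: a correction accurate near the usual percentage points.
    N_plus = N + (N + 2.0 * p - 1.0) / (12.0 * math.sqrt(E))
    # N*: an intermediate refinement used to build N**.
    N_star = N + ((N - 2.0) * (N + 2.0) / 12.0) * (
        1.0 / math.sqrt(n * p + 1) - 1.0 / math.sqrt(n * q + 1))
    # N**: the same style of correction as N+, applied to N*.
    N_star_star = N_star + (N_star + 2.0 * p - 1.0) / (12.0 * math.sqrt(E))
    return N, N_plus, N_star, N_star_star
```

For example, `approx_normal_deviate(60, 100, 0.5)` gives the four approximate deviates for observing 60 or more successes in 100 trials with p = 1/2.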
In the Poisson case, this simplifies to using $\sqrt{x} + \sqrt{x + 1}$ as having variance 1.
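The Poisson variance-stabilizing claim is easy to check empirically. In this sketch, the helper names and the Knuth-style Poisson sampler are ours (not from the paper); it simulates Poisson draws and estimates the variance of the transformed values, which should come out near 1:

```python
import math
import random
import statistics

def freeman_tukey_poisson(x):
    """Double square-root transform sqrt(x) + sqrt(x + 1) for a Poisson count x."""
    return math.sqrt(x) + math.sqrt(x + 1)

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate by Knuth's multiplication method
    (adequate for moderate lam)."""
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

rng = random.Random(1950)
draws = [freeman_tukey_poisson(poisson_sample(10.0, rng)) for _ in range(20000)]
print(f"empirical variance of transformed draws: {statistics.variance(draws):.3f}")
```

The printed variance should be close to 1, in line with the abstract's claim that the transform stabilizes the variance at 1 in the Poisson case.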
The Annals of Mathematical Statistics © 1950 Institute of Mathematical Statistics