Approximate Entropy for Testing Randomness

Andrew L. Rukhin
Journal of Applied Probability
Vol. 37, No. 1 (Mar., 2000), pp. 88-100
Stable URL: http://www.jstor.org/stable/3215661
Page Count: 13

Abstract

This paper arose from interest in assessing the quality of random number generators. The problem of testing the randomness of a string of binary bits produced by such a generator gained importance with the wide use of public key cryptography and the need for secure encryption algorithms. All such algorithms rely on a generator of (pseudo) random numbers, so testing such generators for randomness has become crucial for the communications industry, where digital signatures and key management are vital for information processing. The concept of approximate entropy was introduced in a series of papers by S. Pincus and co-authors. The corresponding statistic is designed to measure the degree of randomness of observed sequences. It is built from incremental contrasts of empirical entropies computed from the frequencies of different patterns in the sequence. Sequences with large approximate entropy must have substantial fluctuation or irregularity; conversely, small values of this characteristic imply strong regularity, or lack of randomness, in a sequence. Pincus and Kalman (1997) evaluated approximate entropies for binary and decimal expansions of $e$, $\pi$, $\sqrt{2}$ and $\sqrt{3}$, with the surprising conclusion that the expansion of $\sqrt{3}$ exhibits much less irregularity than that of $\pi$. Tractable small-sample distributions are hardly available, and testing randomness is based, as a rule, on fairly long strings. Therefore, to obtain rigorous statistical tests of randomness based on the approximate entropy statistic, one needs the limiting distribution of this characteristic under the randomness assumption. Until now this distribution remained unknown and was thought to be difficult to obtain. To derive the limiting distribution of approximate entropy we modify its definition. It is shown that approximate entropy, as well as its modified version, converges in distribution to a $\chi^2$ random variable. The $P$-values of approximate entropy test statistics for binary expansions of $e$, $\pi$ and $\sqrt{3}$ are plotted. Although some of these values for the $\sqrt{3}$ digits are small, they do not provide enough statistical significance against the randomness hypothesis.
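The abstract describes the statistic only informally. As a concrete illustration, below is a minimal Python sketch of an approximate-entropy randomness test in the form later standardized in the NIST SP 800-22 suite (whose randomness-testing work Rukhin led); the block length m, the cyclic wraparound of patterns, and the $\chi^2$/incomplete-gamma P-value formula are taken from that specification rather than from this abstract, so they should be read as assumptions about the intended test, not as the paper's exact definitions.

```python
from math import log
from scipy.special import gammaincc  # regularized upper incomplete gamma, used as igamc


def phi(bits, m):
    """Return sum of p_i * ln(p_i) over all overlapping m-bit patterns (cyclic)."""
    n = len(bits)
    # Append the first m-1 bits so every one of the n positions starts a full pattern.
    padded = bits + bits[: m - 1]
    counts = {}
    for i in range(n):
        pattern = tuple(padded[i : i + m])
        counts[pattern] = counts.get(pattern, 0) + 1
    # Patterns that never occur contribute 0 (the 0 * log 0 = 0 convention).
    return sum((c / n) * log(c / n) for c in counts.values())


def approximate_entropy_test(bits, m=2):
    """Return (ApEn(m), chi_square, p_value) for a 0/1 sequence of length n."""
    n = len(bits)
    apen = phi(bits, m) - phi(bits, m + 1)          # incremental contrast of empirical entropies
    chi_square = 2.0 * n * (log(2.0) - apen)        # assumed NIST-style test statistic
    p_value = gammaincc(2 ** (m - 1), chi_square / 2.0)  # chi^2 tail with 2^m degrees of freedom
    return apen, chi_square, p_value


if __name__ == "__main__":
    import random

    # Hypothetical usage: a pseudo-random bit string should give ApEn close to ln 2.
    bits = [random.getrandbits(1) for _ in range(10000)]
    apen, chi2, p = approximate_entropy_test(bits, m=2)
    print(apen, chi2, p)
```

Under the randomness hypothesis the reported P-value should be approximately uniform on (0, 1), consistent with the $\chi^2$ limiting distribution the paper establishes; small P-values, as reported for some $\sqrt{3}$ digit strings, would count as evidence of regularity only below a chosen significance level.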
