
On Maximizing Missing Information About a Hypothesis

D. M. Eaves
Journal of the Royal Statistical Society. Series B (Methodological)
Vol. 47, No. 2 (1985), pp. 263-266
Published by: Wiley for the Royal Statistical Society
Stable URL: http://www.jstor.org/stable/2345568
Page Count: 4

Abstract

The Bayesian test of a normal mean H0: μ = μ0 is revisited. Several interpretations of prior indifference are contrasted; each employs a prior probability for H0 which in some sense maximizes the information (negative entropy) missing from the prior.
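As an illustration of the simplest such interpretation (not taken from the paper itself), consider a prior that places probability p on H0 and 1 − p on the alternative. The missing information is then the Shannon entropy of this two-point distribution, and maximizing it recovers the familiar indifference assignment p = 1/2:

$$
H(p) = -p \log p - (1-p)\log(1-p), \qquad
\frac{dH}{dp} = \log\frac{1-p}{p} = 0 \;\Longrightarrow\; p = \tfrac{1}{2}.
$$

The interpretations contrasted in the paper differ in which prior quantity the entropy is computed over, not in this basic maximization step.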
