
Bayes Factors

Robert E. Kass and Adrian E. Raftery
Journal of the American Statistical Association
Vol. 90, No. 430 (Jun., 1995), pp. 773-795
DOI: 10.2307/2291091
Stable URL: http://www.jstor.org/stable/2291091
Page Count: 23

Abstract

In a 1935 paper and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this article we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology, and psychology. We emphasize the following points: From Jeffreys' Bayesian viewpoint, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory. Bayes factors offer a way of evaluating evidence in favor of a null hypothesis. Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis. Bayes factors are very general and do not require alternative models to be nested. Several techniques are available for computing Bayes factors, including asymptotic approximations that are easy to compute using the output from standard packages that maximize likelihoods. In "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to compute Bayes factors than to derive non-Bayesian significance tests. The Schwarz criterion (or BIC) gives a rough approximation to the logarithm of the Bayes factor, which is easy to use and does not require evaluation of prior distributions. When one is interested in estimation or prediction, Bayes factors may be converted to weights to be attached to various models so that a composite estimate or prediction may be obtained that takes account of structural or model uncertainty. Algorithms have been proposed that allow model uncertainty to be taken into account when the class of models initially considered is very large. Bayes factors are useful for guiding an evolutionary model-building process. It is important, and feasible, to assess the sensitivity of conclusions to the prior distributions used.
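To make two of the abstract's points concrete, here is a minimal sketch (not from the paper; the data are made up and the model choice is an illustration) of the Schwarz/BIC approximation to the log Bayes factor and of converting model scores into composite weights. It compares H0: Gaussian mean equals zero against H1: mean unrestricted, with the variance estimated under each hypothesis; the BIC difference approximates twice the log Bayes factor, and exponentiating half the negative BICs gives approximate posterior model weights under equal prior odds.

```python
import math

def gaussian_loglik(data, mu, sigma2):
    """Log-likelihood of i.i.d. Gaussian data with mean mu, variance sigma2."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma2))

def bic(max_loglik, k, n):
    """Schwarz criterion: BIC = -2 * maximized log-likelihood + k * log(n)."""
    return -2 * max_loglik + k * math.log(n)

# Illustrative (made-up) observations
data = [0.8, 1.1, 0.3, 1.6, 0.9, 1.2, 0.5, 1.4]
n = len(data)

# H0: mu = 0; only sigma^2 is estimated (k = 1 free parameter)
s2_0 = sum(x ** 2 for x in data) / n          # MLE of variance when mu = 0
bic0 = bic(gaussian_loglik(data, 0.0, s2_0), 1, n)

# H1: mu free; mu and sigma^2 are estimated (k = 2 free parameters)
mu_hat = sum(data) / n
s2_1 = sum((x - mu_hat) ** 2 for x in data) / n
bic1 = bic(gaussian_loglik(data, mu_hat, s2_1), 2, n)

# Rough approximation: 2 * ln(Bayes factor for H1 vs. H0) ≈ BIC_0 - BIC_1
two_log_bf10 = bic0 - bic1

# Approximate posterior model weights under equal prior odds,
# usable for composite (model-averaged) estimation or prediction
raw = [math.exp(-0.5 * b) for b in (bic0, bic1)]
weights = [r / sum(raw) for r in raw]         # weights[0] -> H0, weights[1] -> H1
```

For these data the BIC difference is large and positive, so nearly all of the weight falls on H1; no prior distributions had to be specified, which is exactly the convenience the abstract attributes to the Schwarz criterion.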
