
The Comparison and Evaluation of Forecasters

Morris H. DeGroot and Stephen E. Fienberg
Journal of the Royal Statistical Society. Series D (The Statistician)
Vol. 32, No. 1/2, Proceedings of the 1982 I.O.S. Annual Conference on Practical Bayesian Statistics (Mar. - Jun., 1983), pp. 12-22
Published by: Wiley for the Royal Statistical Society
DOI: 10.2307/2987588
Stable URL: http://www.jstor.org/stable/2987588
Page Count: 11

Abstract

In this paper we present methods for comparing and evaluating forecasters whose predictions are presented as their subjective probability distributions of various random variables that will be observed in the future, e.g. weather forecasters who each day must specify their own probabilities that it will rain in a particular location. We begin by reviewing the concepts of calibration and refinement, and describing the relationship between this notion of refinement and the notion of sufficiency in the comparison of statistical experiments. We also consider the question of interrelationships among forecasters and discuss methods by which an observer should combine the predictions from two or more different forecasters. Then we turn our attention to the concept of a proper scoring rule for evaluating forecasters, relating it to the concepts of calibration and refinement. Finally, we discuss conditions under which one forecaster can exploit the predictions of another forecaster to obtain a better score.
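The full text sits behind JSTOR's access options, but the two central concepts in the abstract lend themselves to a small numerical illustration. The sketch below is not taken from the paper; it is a minimal example, with invented forecast and outcome data, of (a) checking the calibration of a binary rain forecaster (among days assigned probability p, did it rain a fraction p of the time?) and (b) computing the Brier score, a standard strictly proper scoring rule of the kind the paper analyzes.

```python
import numpy as np

# Hypothetical daily rain forecasts (stated probabilities) and
# observed outcomes (1 = rain, 0 = no rain). Data invented for illustration.
forecasts = np.array([0.1, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9, 0.9])
outcomes  = np.array([0,   0,   1,   1,   0,   1,   1,   1])

# Calibration check: for each distinct stated probability p, compare p
# with the empirical frequency of rain on the days that p was forecast.
# A well-calibrated forecaster has these two numbers match in the long run.
for p in np.unique(forecasts):
    hit_rate = outcomes[forecasts == p].mean()
    print(f"stated p = {p:.1f}  empirical frequency = {hit_rate:.2f}")

# Brier score: mean squared difference between forecast and outcome.
# It is strictly proper: a forecaster minimizes the expected score only
# by reporting their true subjective probabilities.
brier = np.mean((forecasts - outcomes) ** 2)
print(f"Brier score = {brier:.3f}")
```

Calibration alone does not distinguish a useful forecaster from one who always states the climatological base rate; the paper's notion of refinement, and its link to sufficiency, addresses exactly that gap.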
