The Comparison and Evaluation of Forecasters
Morris H. DeGroot and Stephen E. Fienberg
Journal of the Royal Statistical Society. Series D (The Statistician)
Vol. 32, No. 1/2, Proceedings of the 1982 I.O.S. Annual Conference on Practical Bayesian Statistics (Mar. - Jun., 1983), pp. 12-22
Stable URL: http://www.jstor.org/stable/2987588
Page Count: 11
In this paper we present methods for comparing and evaluating forecasters whose predictions are presented as their subjective probability distributions of various random variables that will be observed in the future, e.g. weather forecasters who each day must specify their own probabilities that it will rain in a particular location. We begin by reviewing the concepts of calibration and refinement, and describing the relationship between this notion of refinement and the notion of sufficiency in the comparison of statistical experiments. We also consider the question of interrelationships among forecasters and discuss methods by which an observer should combine the predictions from two or more different forecasters. Then we turn our attention to the concept of a proper scoring rule for evaluating forecasters, relating it to the concepts of calibration and refinement. Finally, we discuss conditions under which one forecaster can exploit the predictions of another forecaster to obtain a better score.
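The concepts named in the abstract can be sketched concretely. Below is a minimal illustration, not the authors' formulation: the Brier score stands in for a generic proper scoring rule, the calibration check is the usual empirical one (among days on which probability p was announced, did it rain a fraction p of the time?), and the ten-day forecast data are invented for the example.

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    The Brier score is a classic proper scoring rule: a forecaster
    minimises its expected score only by reporting its true
    subjective probabilities.
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

def empirical_calibration(probs, outcomes):
    """For each distinct announced probability p, the observed relative
    frequency of the event on days when p was announced.  A
    well-calibrated forecaster has each frequency close to p itself.
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return {float(p): float(outcomes[probs == p].mean())
            for p in np.unique(probs)}

# Hypothetical rain forecasts over ten days (1 = rain observed).
forecasts = [0.2, 0.2, 0.8, 0.8, 0.8, 0.2, 0.8, 0.2, 0.2, 0.8]
rained    = [0,   0,   1,   1,   0,   1,   1,   0,   0,   1]

print(brier_score(forecasts, rained))            # → 0.16
print(empirical_calibration(forecasts, rained))  # → {0.2: 0.2, 0.8: 0.8}
```

This forecaster happens to be perfectly calibrated (rain occurred on exactly 20% of the 0.2-days and 80% of the 0.8-days), yet its Brier score of 0.16 is far from the best possible; calibration alone does not distinguish forecasters, which is why the paper pairs it with refinement.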
Journal of the Royal Statistical Society. Series D (The Statistician) © 1983 Royal Statistical Society