Estimating Properties of Autoregressive Forecasts
Robert A. Stine
Journal of the American Statistical Association
Vol. 82, No. 400 (Dec., 1987), pp. 1072-1078
Stable URL: http://www.jstor.org/stable/2289383
Page Count: 7
Forecasting requires estimates of the error of prediction; however, such estimates for autoregressive forecasts depend nonlinearly on unknown parameters and distributions. Substitution estimators of mean squared error (MSE) possess bias that varies with the underlying model, and Gaussian-based prediction intervals fail if the data are not normally distributed. This article proposes methods that avoid these problems. A second-order Taylor expansion produces an estimator of MSE that is unbiased and leads to accurate prediction intervals for Gaussian data. Bootstrapping also suggests an estimator of MSE, but that estimator is approximately the problematic substitution estimator. Bootstrapping does, however, yield prediction intervals whose coverage is invariant to the sampling distribution and asymptotically approaches the nominal content.

Parameter estimation increases the error in autoregressive forecasts. This additional error inflates the one-step prediction mean squared error (PMSE) by a factor of 1 + p/T, where p is the number of parameters and T is the series length (Bloomfield 1972; Box and Jenkins 1976; Davisson 1965). The inflation at longer forecast leads depends on the parameters of the process (Yamamoto 1976). Simple substitution estimators of squared error possess bias that can dominate attempts to estimate this inflation; the proposed bias-corrected estimator alleviates the problem. For example, in simulations of short series (T = 24) from a first-order model with coefficient .8 (.4), the substitution estimator at forecast leads 2 and 3 underestimated the true PMSE by 4.7% and 6.4% (-.4% and -2.2%), whereas the corrected estimator erred by less than .5%.

Prediction intervals based on the bootstrap are preferable unless the sampling distribution is known. The bias-corrected estimator of PMSE leads to a very accurate prediction interval for Gaussian data, but its coverage depends on the normality assumption.
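The 1 + p/T inflation of one-step PMSE can be illustrated with a small Monte Carlo sketch. This pure-Python example is not from the article; the coefficient .8 and series length T = 24 are borrowed from the simulations described above. With p = 1 and T = 24, the predicted ratio of one-step PMSE to the innovation variance is about 1 + 1/24 ≈ 1.04.

```python
import math
import random

def simulate_ar1(phi, T, rng, sigma=1.0):
    """Simulate a stationary AR(1) series of length T + 1 (last value held out)."""
    x = [rng.gauss(0.0, sigma / math.sqrt(1.0 - phi * phi))]  # stationary start
    for _ in range(T):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    return x

def fit_ar1(x):
    """Least-squares estimate of the AR(1) coefficient (no intercept)."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    return num / den

def one_step_pmse_ratio(phi=0.8, T=24, reps=20000, seed=1):
    """Monte Carlo ratio of one-step PMSE to the innovation variance.

    Theory predicts roughly 1 + p/T with p = 1 parameter estimated here.
    """
    rng = random.Random(seed)
    sse = 0.0
    for _ in range(reps):
        x = simulate_ar1(phi, T, rng)       # x[0..T]; x[T] is the hold-out
        phi_hat = fit_ar1(x[:T])            # estimate from the first T points
        err = x[T] - phi_hat * x[T - 1]     # one-step forecast error
        sse += err * err
    return sse / reps                       # innovation variance is 1

ratio = one_step_pmse_ratio()               # theory: about 1 + 1/24, i.e. 1.04
```

Small-sample effects (e.g., the downward bias of the least-squares coefficient for T this short) can push the empirical ratio slightly above the asymptotic 1 + p/T value.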
A bootstrap interval asymptotically approaches the desired coverage but is less efficient. For non-Gaussian data, only the bootstrap intervals necessarily tend toward the correct coverage. Although numerical results show that bootstrap intervals fall short of the nominal coverage by several percentage points, this deficiency is consistent across sampling distributions and decays rapidly with increasing series length.
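A residual-resampling bootstrap interval of the kind discussed above can be sketched as follows. This is a simplified illustration under assumed details (an AR(1) fit by least squares without an intercept, a percentile interval built from bootstrap one-step forecast errors), not the article's exact algorithm.

```python
import random

def fit_ar1(x):
    """Least-squares estimate of the AR(1) coefficient (no intercept)."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    return num / den

def bootstrap_interval(x, level=0.90, B=999, seed=1):
    """Percentile-style bootstrap interval for the one-step AR(1) forecast.

    Sketch: resample centered residuals to rebuild bootstrap series,
    refit the coefficient, and collect bootstrap forecast errors so the
    interval reflects both innovation noise and estimation error.
    """
    rng = random.Random(seed)
    T = len(x)
    phi_hat = fit_ar1(x)
    res = [x[t] - phi_hat * x[t - 1] for t in range(1, T)]
    mean_res = sum(res) / len(res)
    res = [e - mean_res for e in res]           # center the residuals
    point = phi_hat * x[-1]                     # one-step point forecast
    errors = []
    for _ in range(B):
        xb = [x[0]]                             # rebuild a bootstrap series
        for _ in range(T - 1):
            xb.append(phi_hat * xb[-1] + rng.choice(res))
        phi_b = fit_ar1(xb)                     # re-estimated coefficient
        future = phi_hat * xb[-1] + rng.choice(res)
        errors.append(future - phi_b * xb[-1])  # bootstrap forecast error
    errors.sort()
    alpha = (1.0 - level) / 2.0
    return point + errors[int(alpha * B)], point + errors[int((1.0 - alpha) * B)]

# Example on a short simulated series (T = 24, coefficient .8, as above).
rng = random.Random(7)
x = [rng.gauss(0.0, 1.0)]
for _ in range(23):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))
lo, hi = bootstrap_interval(x)
```

Because the interval is built from an empirical error distribution rather than a Gaussian quantile, its coverage does not depend on the normality of the innovations, at the cost of the extra Monte Carlo computation.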
© 1987 American Statistical Association