
# Adaptive Prediction of Stationary Time Series

H. T. Davis and L. H. Koopmans
Sankhyā: The Indian Journal of Statistics, Series A (1961-2002)
Vol. 35, No. 1, Dedicated to the Memory of P. C. Mahalanobis (Mar., 1973), pp. 5-22
Published by: Springer on behalf of the Indian Statistical Institute
Stable URL: http://www.jstor.org/stable/25049844
Page Count: 18


## Abstract

The paper presents two schemes for adaptively estimating the parameters of a finite-memory linear predictor for a weakly stationary stochastic process. Let $x_n$ be the vector of the $n$-th estimators of the parameters, $\varphi_n(k) = \frac{1}{n}\sum_{t=1}^{n-k} u_t u_{t+k}$ the positive definite estimator of the covariance function of the original process $\{u_t\}$, $\boldsymbol{\varphi}_n = [\varphi_n(k)]$, $k = 1, \ldots, p$, a $p \times 1$ vector, and $\boldsymbol{\Phi}_n = [\varphi_n(i-j)]$, $i, j = 1, \ldots, p$, a $p \times p$ matrix. L. A. Gardner (Transactions of the Third Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, 1964) presents an adaptive estimator of the form $x_{n+1} = x_n + A_n(\boldsymbol{\varphi}_n - \boldsymbol{\Phi}_n x_n)$, where the $p \times p$ matrix $A_n = (a/n)I$ for any $a > 0$. The class of matrices $A_n$ is first generalized to a much larger class which includes, among others, certain stochastic matrices as well as certain estimators suggested through an analogy to Kalman filtering. The second scheme is entirely new and is based on the conjugate direction methods for solving systems of linear equations. This scheme also extends the class of matrices $A_n$ to a class whose elements are no longer of order $1/n$. Almost sure convergence of both schemes to the parameters of the best finite-memory predictor is established, and the selection of the best $A_n$ from the viewpoint of improving the convergence rates of the schemes is discussed.
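The Gardner-type recursion stated in the abstract can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the function names, the choice $a = 1$, and the simulated AR(1) process are assumptions made purely for demonstration. It forms the sample covariances $\varphi_n(k)$, assembles $\boldsymbol{\varphi}_n$ and $\boldsymbol{\Phi}_n$, and applies the update $x_{n+1} = x_n + (a/n)(\boldsymbol{\varphi}_n - \boldsymbol{\Phi}_n x_n)$.

```python
import numpy as np

def sample_covariances(u, p):
    """phi_n(k) = (1/n) * sum_{t=1}^{n-k} u_t u_{t+k}, for k = 0, ..., p."""
    n = len(u)
    return np.array([np.dot(u[: n - k], u[k:]) / n for k in range(p + 1)])

def gardner_adaptive_predictor(u, p, a=1.0):
    """Gardner-type scheme with A_n = (a/n) I:
    x_{n+1} = x_n + (a/n) * (phi_n - Phi_n x_n)."""
    x = np.zeros(p)
    for n in range(p + 1, len(u) + 1):
        c = sample_covariances(u[:n], p)          # c[k] = phi_n(k)
        Phi = np.array([[c[abs(i - j)] for j in range(p)] for i in range(p)])
        phi = c[1 : p + 1]                        # the p x 1 vector [phi_n(1), ..., phi_n(p)]
        x = x + (a / n) * (phi - Phi @ x)
    return x

# Illustrative data (our own choice): an AR(1) process u_t = 0.6 u_{t-1} + e_t,
# whose best finite-memory predictor of order 2 has parameters close to (0.6, 0).
rng = np.random.default_rng(0)
u = np.zeros(2000)
for t in range(1, len(u)):
    u[t] = 0.6 * u[t - 1] + rng.standard_normal()

x_hat = gardner_adaptive_predictor(u, p=2, a=1.0)
```

Because the step size $a/n$ shrinks, the iterates settle down slowly; the paper's second, conjugate-direction-based scheme is motivated precisely by removing this $1/n$ restriction on $A_n$.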
