
Adaptive Prediction of Stationary Time Series

H. T. Davis and L. H. Koopmans
Sankhyā: The Indian Journal of Statistics, Series A (1961-2002)
Vol. 35, No. 1, Dedicated to the Memory of P. C. Mahalanobis (Mar., 1973), pp. 5-22
Published by: Springer on behalf of the Indian Statistical Institute
Stable URL: http://www.jstor.org/stable/25049844
Page Count: 18

Abstract

The paper presents two schemes for adaptively estimating the parameters of a finite memory linear predictor for a weakly stationary stochastic process. Let $x_n$ be the vector of the $n$-th estimators of the parameters, $\varphi_n(k)=\frac{1}{n}\sum_{t=1}^{n-k}u_t u_{t+k}$ the positive definite estimator of the covariance function for the original process $\{u_t\}$, $\boldsymbol{\varphi}_n=[\varphi_n(k)]$, $k=1,\ldots,p$, a $p\times 1$ vector, and $\boldsymbol{\Phi}_n=[\varphi_n(i-j)]$, $i,j=1,\ldots,p$, a $p\times p$ matrix. L. A. Gardner (Transactions of the Third Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, 1964) presents an adaptive estimator of the form $x_{n+1}=x_n+A_n(\boldsymbol{\varphi}_n-\boldsymbol{\Phi}_n x_n)$, where the $p\times p$ matrix $A_n=(a/n)I$ for any $a>0$. The class of matrices $A_n$ is first generalized to a much larger class which includes, among others, certain stochastic matrices as well as certain estimators suggested through an analogy to Kalman filtering. The second scheme is entirely new and is based on the conjugate direction methods for solving systems of linear equations. This scheme also extends the class of matrices $A_n$ to a class whose elements are no longer of order $1/n$. Almost sure convergence of both schemes to the parameters of the best finite memory predictor is established, and the selection of the best $A_n$ from the viewpoint of improving the convergence rates of the schemes is discussed.
