Journal Article
Adaptive Prediction of Stationary Time Series
H. T. Davis and L. H. Koopmans
Sankhyā: The Indian Journal of Statistics, Series A (1961–2002)
Vol. 35, No. 1, Dedicated to the Memory of P. C. Mahalanobis (Mar., 1973), pp. 5–22
Published by: Indian Statistical Institute
Stable URL: http://www.jstor.org/stable/25049844
Page Count: 18
Topics: Matrices, Estimators, Eigenvalues, Integers, Time series forecasting, Estimation methods, Statistical theories, Linear systems, Stochastic processes, Statistical estimation
Abstract
The paper presents two schemes for adaptively estimating the parameters of a finite memory linear predictor for a weakly stationary stochastic process. Let $\mathbf{x}_{n}$ be the vector of the $n$th estimators of the parameters, $\varphi_{n}(k)=\frac{1}{n}\sum_{t=1}^{n-k}u_{t}u_{t+k}$ the positive definite estimator of the covariance function for the original process $\{u_{t}\}$, $\boldsymbol{\varphi}_{n}=[\varphi_{n}(k)]$, $k=1,\ldots,p$, a $p\times 1$ vector, and $\boldsymbol{\Phi}_{n}=[\varphi_{n}(i-j)]$, $i,j=1,\ldots,p$, a $p\times p$ matrix. L. A. Gardner (Transactions of the Third Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, 1964) presents an adaptive estimator of the form $\mathbf{x}_{n+1}=\mathbf{x}_{n}+\mathbf{A}_{n}(\boldsymbol{\varphi}_{n}-\boldsymbol{\Phi}_{n}\mathbf{x}_{n})$, where the $p\times p$ matrix $\mathbf{A}_{n}=(a/n)\mathbf{I}$ for any $a>0$. The class of matrices $\mathbf{A}_{n}$ is first generalized to a much larger class which includes, among others, certain stochastic matrices as well as certain estimators suggested through an analogy to Kalman filtering. The second scheme is entirely new and is based on the conjugate direction methods for solving systems of linear equations. This scheme also extends the class of matrices $\mathbf{A}_{n}$ to a class whose elements are no longer of order $1/n$. Almost sure convergence of both schemes to the parameters of the best finite memory predictor is established, and the selection of the best $\mathbf{A}_{n}$ from the viewpoint of improving the convergence rates of the schemes is discussed.
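The first scheme in the abstract is a stochastic-approximation recursion and can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the AR(2) process, the memory $p$, the gain $a$, and the sample size $N$ are all illustrative assumptions.

```python
import numpy as np

# Sketch of the Gardner-style recursion from the abstract:
#     x_{n+1} = x_n + A_n (phi_n - Phi_n x_n),   A_n = (a/n) I,  a > 0.
# Process, p, a, and N below are illustrative choices, not from the paper.

rng = np.random.default_rng(0)
p, a, N = 2, 1.0, 3000

# Simulate a weakly stationary process: u_t = 0.5 u_{t-1} - 0.2 u_{t-2} + e_t
u = np.zeros(N)
for t in range(2, N):
    u[t] = 0.5 * u[t - 1] - 0.2 * u[t - 2] + rng.standard_normal()

def phi(n, k):
    # Positive definite covariance estimator: phi_n(k) = (1/n) sum_{t=1}^{n-k} u_t u_{t+k}
    return u[:n - k] @ u[k:n] / n

x = np.zeros(p)  # x_n: current parameter estimate
for n in range(p + 1, N):
    phi_vec = np.array([phi(n, k) for k in range(1, p + 1)])  # p x 1 vector
    Phi = np.array([[phi(n, abs(i - j)) for j in range(p)]    # p x p matrix
                    for i in range(p)])
    x = x + (a / n) * (phi_vec - Phi @ x)                     # adaptive update

print(x)  # estimates of the best finite-memory predictor coefficients
```

For this simulated AR(2) process the best finite memory predictor of order 2 has coefficients $(0.5, -0.2)$, which the recursion should approach as $n$ grows; the $1/n$ gain is exactly the case the paper generalizes away from.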
Sankhyā: The Indian Journal of Statistics, Series A (1961–2002) © 1973 Indian Statistical Institute