
Convergence Rate of Sieve Estimates

Xiaotong Shen and Wing Hung Wong
The Annals of Statistics
Vol. 22, No. 2 (Jun., 1994), pp. 580-615
Stable URL: http://www.jstor.org/stable/2242281
Page Count: 36

Abstract

In this paper, we develop a general theory for the convergence rate of sieve estimates, maximum likelihood estimates (MLE's) and related estimates obtained by optimizing certain empirical criteria in general parameter spaces. In many cases, especially when the parameter space is infinite dimensional, maximization over the whole parameter space is undesirable. In such cases, one has to perform maximization over an approximating space (sieve) of the original parameter space and allow the size of the approximating space to grow as the sample size increases. This method is called the method of sieves. In the case of maximum likelihood estimation, an MLE based on a sieve is called a sieve MLE. We found that the convergence rate of a sieve estimate is governed by (a) the local expected values, variances and L2 entropy of the criterion differences and (b) the approximation error of the sieve. A robust nonparametric regression problem, a mixture problem and a nonparametric regression problem are discussed as illustrations of the theory. We also found that when the underlying space is too large, the estimate based on optimizing over the whole parameter space may not achieve the best possible rate of convergence, whereas the sieve estimate typically does not suffer from this difficulty.
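The method of sieves described above can be illustrated with a minimal sketch: optimize an empirical criterion (here, least squares for nonparametric regression) over a finite-dimensional approximating space whose dimension grows with the sample size. The polynomial sieve and the n^(1/3) growth rate below are illustrative assumptions for this sketch, not the paper's prescription; the paper's theory covers far more general sieves and criteria.

```python
import numpy as np

def sieve_regression(x, y, degree=None):
    """Least-squares sieve estimate of a regression function.

    The sieve is the space of polynomials whose degree grows slowly
    with the sample size n, so the approximating space expands as
    more data arrive (an illustrative choice of sieve).
    """
    n = len(x)
    if degree is None:
        # Let the sieve dimension grow like n^(1/3); this growth rate
        # is a hypothetical choice for illustration only.
        degree = max(1, int(np.floor(n ** (1.0 / 3.0))))
    # Optimize the empirical least-squares criterion over the sieve,
    # rather than over the (infinite-dimensional) space of all functions.
    coefs = np.polyfit(x, y, degree)
    return np.poly1d(coefs)

# Usage: recover a smooth regression function from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
fhat = sieve_regression(x, y)
```

With n = 200 the sieve dimension is modest (degree 5), trading a small approximation error against estimation variance; as n grows, the sieve expands and the approximation error shrinks, which is the balance governing the convergence rates studied in the paper.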
