Parameter Tuning in Pointwise Adaptation Using a Propagation Approach

Vladimir Spokoiny and Céline Vial
The Annals of Statistics
Vol. 37, No. 5B (Oct., 2009), pp. 2783-2807
Stable URL: http://www.jstor.org/stable/30243728
Page Count: 25

Abstract

This paper discusses the problem of adaptive estimation of a univariate object, such as the value of a regression function at a given point or a linear functional in a linear inverse problem. We consider an adaptive procedure originating from Lepski [Theory Probab. Appl. 35 (1990) 454-466] that selects, in a data-driven way, one estimate out of a given class of estimates ordered by their variability. A serious problem with using this and similar procedures is the choice of tuning parameters such as thresholds. Numerical results show that the theoretically recommended proposals appear to be too conservative and lead to a strong oversmoothing effect. A careful choice of the parameters of the procedure is therefore extremely important for obtaining a reasonable quality of estimation. The main contribution of this paper is a new approach to choosing the parameters of the procedure by requiring a prescribed behavior of the resulting estimate in the simple parametric situation. We establish a nonasymptotic "oracle" bound, which shows that the estimation risk is, up to a logarithmic multiplier, equal to the risk of the "oracle" estimate that is optimally selected from the given family. A numerical study demonstrates the good performance of the resulting procedure in a number of simulated examples.
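
To make the selection rule described in the abstract concrete, the following Python sketch implements a basic Lepski-type comparison over a family of estimates ordered by decreasing variability. This is only an illustration of the general scheme, not the authors' procedure: the function name lepski_select, the toy data, and the constant critical values crit_values are placeholders, whereas calibrating those critical values via the propagation condition in the parametric situation is precisely the contribution of the paper.

```python
import numpy as np

def lepski_select(estimates, std_devs, crit_values):
    """Sequential Lepski-type selection (illustrative sketch).

    estimates[k] come from increasing amounts of smoothing, so
    std_devs is non-increasing in k; crit_values[j] are thresholds
    z_j, here taken as given (in the paper they are tuned by the
    propagation condition).

    Accept estimate k while it stays within z_j * std_devs[j] of every
    earlier, more variable estimate j < k; stop at the first violation
    and return the last accepted index and estimate.
    """
    k_hat = 0
    for k in range(1, len(estimates)):
        consistent = all(
            abs(estimates[k] - estimates[j]) <= crit_values[j] * std_devs[j]
            for j in range(k)
        )
        if not consistent:
            break
        k_hat = k
    return k_hat, estimates[k_hat]


# Toy usage: estimates of f(x0) with growing bandwidth, i.e. decreasing
# variance and growing bias (all numbers are made up for illustration).
rng = np.random.default_rng(0)
true_value = 1.0
std_devs = np.array([0.50, 0.30, 0.20, 0.12, 0.08])
biases = np.array([0.00, 0.00, 0.02, 0.10, 0.40])
estimates = true_value + biases + std_devs * rng.standard_normal(5)
crit_values = np.full(5, 2.5)  # placeholder thresholds, not tuned
k_hat, theta_hat = lepski_select(estimates, std_devs, crit_values)
print(f"selected index {k_hat}, estimate {theta_hat:.3f}")
```

As the abstract notes, the practical behavior of such a rule hinges entirely on the thresholds: values that are too large keep accepting heavily smoothed (biased) estimates, producing the oversmoothing effect mentioned above, which is why the paper calibrates them from the procedure's behavior in the parametric case.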
