Journal Article

The Dantzig Selector: Statistical Estimation When p Is Much Larger than n

Emmanuel Candes and Terence Tao
The Annals of Statistics
Vol. 35, No. 6 (Dec., 2007), pp. 2313-2351
Stable URL: http://www.jstor.org/stable/25464587
Page Count: 39

Abstract

In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^{p}$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator, the Dantzig selector, which is the solution to the $\ell_1$-regularization problem $\min_{\tilde{\beta} \in \mathbf{R}^{p}} \|\tilde{\beta}\|_{\ell_1}$ subject to $\|X^{*} r\|_{\ell_{\infty}} \leq (1 + t^{-1})\sqrt{2 \log p} \cdot \sigma$, where $r = y - X\tilde{\beta}$ is the residual vector and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, $\|\hat{\beta} - \beta\|_{\ell_2}^{2} \leq C^{2} \cdot 2 \log p \cdot \bigl(\sigma^{2} + \sum_{i} \min(\beta_i^{2}, \sigma^{2})\bigr)$. Our results are nonasymptotic, and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which exceed the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible to nearly select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
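The constrained $\ell_1$ problem above becomes an LP in the standard way: introduce auxiliary variables $u \geq 0$ with $-u \leq \tilde{\beta} \leq u$, minimize $\sum_i u_i$, and express the residual-correlation constraint $|X^{T}(y - X\tilde{\beta})| \leq \lambda$ as two sets of linear inequalities. Below is a minimal sketch of that formulation solved with scipy.optimize.linprog; the function name, the default t = 1, and the toy data are illustrative assumptions, not the authors' implementation.

# Minimal sketch: the Dantzig selector recast as an LP (illustrative,
# not the authors' code). Assumes columns of X are roughly unit-normed.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, sigma, t=1.0):
    """Solve min ||b||_1 s.t. ||X^T (y - X b)||_inf <= lam,
    with lam = (1 + 1/t) * sqrt(2 log p) * sigma, as an LP in z = [b; u]:
    minimize sum(u) subject to -u <= b <= u and |X^T (y - X b)| <= lam."""
    n, p = X.shape
    lam = (1.0 + 1.0 / t) * np.sqrt(2.0 * np.log(p)) * sigma
    G = X.T @ X
    Xty = X.T @ y
    Ip = np.eye(p)
    Z = np.zeros((p, p))
    # Four inequality blocks acting on z = [b; u].
    A_ub = np.block([
        [ Ip, -Ip],   #  b - u <= 0
        [-Ip, -Ip],   # -b - u <= 0
        [-G,   Z ],   #  X^T y - G b <= lam  ->  -G b <= lam - X^T y
        [ G,   Z ],   # -X^T y + G b <= lam  ->   G b <= lam + X^T y
    ])
    b_ub = np.concatenate([np.zeros(2 * p), lam - Xty, lam + Xty])
    c = np.concatenate([np.zeros(p), np.ones(p)])   # objective: sum(u)
    bounds = [(None, None)] * p + [(0, None)] * p   # b free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Toy example with n << p and a sparse beta (hypothetical data).
rng = np.random.default_rng(0)
n, p, sigma = 40, 100, 0.5
X = rng.standard_normal((n, p)) / np.sqrt(n)  # columns ~ unit norm
beta = np.zeros(p)
beta[:4] = [3.0, -2.5, 2.0, 1.5]
y = X @ beta + sigma * rng.standard_normal(n)
print(np.round(dantzig_selector(X, y, sigma)[:6], 2))

The LP has 2p variables and 4p inequality rows, so it stays easy to solve at moderate p; this directness, rather than any tuning path, is what the abstract means by "a very simple convex program."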
