Journal Article

# The Dantzig Selector: Statistical Estimation When p Is Much Larger than n

Emmanuel Candes and Terence Tao
The Annals of Statistics
Vol. 35, No. 6 (Dec., 2007), pp. 2313-2351
Stable URL: http://www.jstor.org/stable/25464587
Page Count: 39

## Abstract

In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^{p}$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator, which we call the Dantzig selector, defined as a solution to the $\ell_1$-regularization problem $\min_{\tilde{\beta}\in\mathbf{R}^{p}} \|\tilde{\beta}\|_{\ell_1}$ subject to $\|X^{*} r\|_{\ell_\infty} \leq (1+t^{-1})\sqrt{2\log p}\cdot\sigma$, where $r = y - X\tilde{\beta}$ is the residual vector and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, $\|\hat{\beta}-\beta\|_{\ell_2}^{2} \leq C^{2}\cdot 2\log p\cdot\bigl(\sigma^{2}+\sum_{i}\min(\beta_{i}^{2},\sigma^{2})\bigr)$. Our results are nonasymptotic, and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which are above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible to nearly select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
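The abstract notes that the Dantzig selector can be recast as a linear program. A minimal sketch of that recasting is below; it is not the authors' implementation, and the function name `dantzig_selector`, the use of `scipy.optimize.linprog`, and the choice of the threshold `lam` are illustrative assumptions. The standard trick is to bound each $|\tilde{\beta}_i|$ by an auxiliary variable $u_i$, minimize $\sum_i u_i$, and express the $\ell_\infty$ constraint on $X^{*}r$ as a pair of linear inequalities.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Illustrative LP formulation of the Dantzig selector.

    Solves  min ||b||_1  subject to  ||X.T @ (y - X @ b)||_inf <= lam
    via the split z = [b, u], |b_i| <= u_i, minimizing sum(u).
    """
    n, p = X.shape
    G = X.T @ X
    Xty = X.T @ y

    # Objective: minimize sum of the magnitude bounds u.
    c = np.concatenate([np.zeros(p), np.ones(p)])

    # Inequality constraints A_ub @ z <= b_ub:
    #   G b <= X^T y + lam   and  -G b <= lam - X^T y   (the l_inf constraint)
    #   b - u <= 0           and  -b - u <= 0           (|b_i| <= u_i)
    A_ub = np.vstack([
        np.hstack([ G,         np.zeros((p, p))]),
        np.hstack([-G,         np.zeros((p, p))]),
        np.hstack([ np.eye(p), -np.eye(p)]),
        np.hstack([-np.eye(p), -np.eye(p)]),
    ])
    b_ub = np.concatenate([Xty + lam, lam - Xty, np.zeros(p), np.zeros(p)])

    bounds = [(None, None)] * p + [(0, None)] * p  # b free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

In the paper's setting the threshold would be taken as $\lambda = (1+t^{-1})\sqrt{2\log p}\cdot\sigma$; the sketch leaves `lam` as a parameter. For a quick sanity check one can verify, on noiseless data, that the returned vector satisfies the residual constraint and has $\ell_1$ norm no larger than that of the true sparse vector.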
