Journal Article

High-Dimensional Graphs and Variable Selection with the Lasso

Nicolai Meinshausen and Peter Bühlmann
The Annals of Statistics
Vol. 34, No. 3 (Jun., 2006), pp. 1436-1462
Stable URL: http://www.jstor.org/stable/25463463
Page Count: 27

Abstract

The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models. We show that the proposed neighborhood selection scheme is consistent for sparse high-dimensional graphs. Consistency hinges on the choice of the penalty parameter. The oracle value for optimal prediction does not lead to a consistent neighborhood estimate. Controlling instead the probability of falsely joining some distinct connectivity components of the graph, consistent estimation for sparse graphs is achieved (with exponential rates), even when the number of variables grows as the number of observations raised to an arbitrary power.
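The neighborhood-selection idea in the abstract — lasso-regress each variable on all the others and read edges off the nonzero coefficients — can be sketched in a few lines. This is an illustrative sketch only, using scikit-learn's `Lasso` (not the authors' code); the penalty value `alpha` and the `"and"`/`"or"` rules for symmetrizing the estimated neighborhoods are assumptions chosen for the example, and the paper's point is precisely that this penalty must be tuned for consistent graph recovery rather than for prediction.

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, alpha=0.1, rule="and"):
    """Estimate an undirected graph by lasso-regressing each node on the rest.

    X     : (n, p) data matrix, one column per variable.
    alpha : lasso penalty (illustrative default; the key tuning parameter).
    rule  : 'and' keeps an edge only if both nodes select each other,
            'or' keeps it if either does.
    Returns a boolean (p, p) adjacency matrix with an empty diagonal.
    """
    n, p = X.shape
    support = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)          # all variables except j
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        support[j, others] = fit.coef_ != 0          # j's estimated neighborhood
    # Neighborhoods need not be symmetric; combine them into one graph.
    return support & support.T if rule == "and" else support | support.T
```

For example, on data generated from a three-node chain (x0 → x1 → x2), the estimate should link each node to its chain neighbors; the returned matrix is symmetric with no self-edges by construction.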
