
Least Angle Regression

Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani
The Annals of Statistics
Vol. 32, No. 2 (Apr., 2004), pp. 407-451
Stable URL: http://www.jstor.org/stable/3448465
Page Count: 45

Abstract

The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a C_p estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
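
The following is a minimal sketch, not the authors' reference code, of how the abstract's claims can be exercised in practice. It uses scikit-learn's lars_path as a stand-in for the publicly available algorithm the abstract mentions, on synthetic data with a sparse true signal (all of which are assumptions of this sketch). Passing method="lasso" invokes the LARS modification of property (1), which traces the full Lasso coefficient path; for property (3), the degrees of freedom of the fit after each step are approximated by the number of active covariates, as the paper proposes, and plugged into the standard Mallows formula C_p = RSS / sigma2_hat - n + 2 * df, with sigma2_hat estimated here from the full-model OLS residuals.

```python
# Sketch: trace the Lasso path via LARS and pick a step by C_p.
# Data, sigma2 estimate, and df ~ (number of active covariates)
# are illustrative assumptions, not prescriptions from the paper.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]          # sparse true signal
y = X @ beta + rng.standard_normal(n)

# method="lasso" runs the LARS modification that computes every
# Lasso estimate along the path; method="lar" is plain LARS.
alphas, active, coefs = lars_path(X, y, method="lasso")

# Estimate the noise variance from the full OLS fit.
ols_resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
sigma2_hat = ols_resid @ ols_resid / (n - p)

# Mallows' C_p at each point on the path, with df approximated by
# the number of nonzero coefficients in that fit.
cp = []
for k in range(coefs.shape[1]):
    resid = y - X @ coefs[:, k]
    df = np.count_nonzero(coefs[:, k])
    cp.append(resid @ resid / sigma2_hat - n + 2 * df)

best = int(np.argmin(cp))
print("path step chosen by C_p:", best)
print("selected coefficients:", np.round(coefs[:, best], 2))
```

With a well-separated sparse signal like the one above, the C_p-chosen step typically recovers the three true covariates, illustrating the "principled choice among the range of possible LARS estimates" that the abstract describes.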
