The Conjugate Gradient Method and Trust Regions in Large Scale Optimization

Trond Steihaug
SIAM Journal on Numerical Analysis
Vol. 20, No. 3 (Jun., 1983), pp. 626-637
Stable URL: http://www.jstor.org/stable/2157277
Page Count: 12

Abstract

Algorithms based on trust regions have been shown to be robust methods for unconstrained optimization problems. All existing methods, based on either the dogleg strategy or Hebden-Moré iterations, require the solution of a system of linear equations. In large-scale optimization this may be prohibitively expensive. It is shown in this paper that an approximate solution of the trust region problem may be found by the preconditioned conjugate gradient method. This may be regarded as a generalized dogleg technique in which we asymptotically take the inexact quasi-Newton step. We also show that the method has the same convergence properties as existing methods based on the dogleg strategy using an approximate Hessian.
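
To make the idea concrete, the following is a minimal sketch in Python/NumPy of a truncated conjugate gradient solver for the trust-region subproblem min_s g's + (1/2) s'Bs subject to ||s|| <= delta, stopping early when negative curvature is detected or the iterates reach the trust-region boundary. It omits the preconditioning discussed in the paper, and the function names and tolerances are illustrative assumptions, not the paper's own algorithm or notation.

import numpy as np

def cg_trust_region(B, g, delta, tol=1e-8, max_iter=None):
    """Approximately minimize g's + 0.5 s'Bs subject to ||s|| <= delta."""
    n = g.shape[0]
    max_iter = max_iter or n
    s = np.zeros(n)          # current step, starts at the origin
    r = g.copy()             # model gradient residual, r = B s + g
    d = -r                   # first search direction: steepest descent
    if np.linalg.norm(r) < tol:
        return s
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0.0:
            # Negative curvature: follow d to the trust-region boundary.
            return s + _to_boundary(s, d, delta) * d
        alpha = (r @ r) / dBd
        s_next = s + alpha * d
        if np.linalg.norm(s_next) >= delta:
            # The full CG step leaves the trust region: stop on the boundary.
            return s + _to_boundary(s, d, delta) * d
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return s_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        s, r = s_next, r_next
    return s

def _to_boundary(s, d, delta):
    # Positive root tau of ||s + tau d|| = delta.
    a = d @ d
    b = 2.0 * (s @ d)
    c = s @ s - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

Because the first iterate is the steepest-descent (Cauchy) step and the iteration stops at the boundary or at negative curvature, the computed step interpolates between the gradient direction and the (inexact) quasi-Newton step, which is why the method can be viewed as a generalized dogleg technique.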
