The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
SIAM Journal on Numerical Analysis
Vol. 20, No. 3 (Jun., 1983), pp. 626-637
Published by: Society for Industrial and Applied Mathematics
Stable URL: http://www.jstor.org/stable/2157277
Page Count: 12
Algorithms based on trust regions have been shown to be robust methods for unconstrained optimization problems. All existing methods, whether based on the dogleg strategy or on Hebden-Moré iterations, require the solution of a system of linear equations. In large-scale optimization this may be prohibitively expensive. It is shown in this paper that an approximate solution of the trust region problem may be found by the preconditioned conjugate gradient method. This may be regarded as a generalized dogleg technique in which we asymptotically take the inexact quasi-Newton step. We also show that this approach has the same convergence properties as existing methods based on the dogleg strategy using an approximate Hessian.
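The idea in the abstract can be sketched as follows: run conjugate gradient iterations on the quadratic model min g^T p + (1/2) p^T H p, and stop early either when an iterate would leave the trust region of radius delta (stepping to the boundary instead) or when a direction of negative curvature is detected. The sketch below is a minimal, unpreconditioned version of this truncated-CG strategy; the paper itself works with a preconditioned iteration, and all function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

def truncated_cg(H, g, delta, tol=1e-8, max_iter=None):
    """Approximately minimize g^T p + 0.5 p^T H p subject to ||p|| <= delta
    by conjugate gradients, truncated at the trust-region boundary or at
    the first direction of negative curvature (a simplified sketch)."""
    n = len(g)
    if max_iter is None:
        max_iter = 2 * n
    p = np.zeros(n)
    r = g.copy()        # model gradient at p: r = H p + g (here p = 0)
    if np.linalg.norm(r) < tol:
        return p
    d = -r              # first search direction: steepest descent
    for _ in range(max_iter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:
            # negative curvature: follow d to the trust-region boundary
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # full CG step leaves the trust region: stop on the boundary
            return _to_boundary(p, d, delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next          # model minimized to tolerance
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, delta):
    # positive root tau of ||p + tau d||^2 = delta^2
    a = d @ d
    b = 2.0 * (p @ d)
    c = p @ p - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p + tau * d
```

When the trust region is inactive and H is positive definite, this reduces to ordinary CG and asymptotically returns the (inexact) quasi-Newton step, which is the sense in which the paper's method generalizes the dogleg technique.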