Abstract

The minimum residual method (MINRES) of Paige and Saunders [SIAM J. Numer. Anal., 12 (1975), pp. 617--629], often the method of choice for symmetric linear systems, generalizes the conjugate residual method (CR) proposed by Hestenes and Stiefel [J. Res. Natl. Bur. Stand. (U.S.), 49 (1952), pp. 409--436]. Like the conjugate gradient method (CG), CR possesses properties that are desirable for unconstrained optimization, but it is only defined for symmetric positive-definite operators. CR's main property, that it minimizes the residual, is particularly appealing in inexact Newton methods for optimization, which are typically used in a linesearch context. CR is also relevant in a trust-region context because it causes a monotonic decrease of convex quadratic models [D. C.-L. Fong and M. A. Saunders, SQU J. Sci., 17 (2012), pp. 44--62]. We investigate modifications that make CR suitable even in the presence of negative curvature, and we compare it with CG on convex and nonconvex problems. We complete our investigation with an extension suitable for nonlinear least-squares problems. Our experiments reveal that CR performs as well as or better than CG, with savings mainly in operator-vector products.
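For context, the classical CR iteration of Hestenes and Stiefel for a symmetric positive-definite system Ax = b can be sketched as below. This is a minimal illustration of the baseline method, not of the modified variants the paper proposes; the function name `cr` and the tolerance and iteration-limit parameters are illustrative choices.

```python
import numpy as np

def cr(A, b, tol=1e-10, maxiter=100):
    """Classical conjugate residual (CR) iteration for a symmetric
    positive-definite matrix A. Each pass requires a single product
    with A and monotonically decreases the residual norm ||b - A x||."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol:
            break
        alpha = rAr / (Ap @ Ap)    # step length minimizing the residual norm
        x += alpha * p
        r -= alpha * Ap
        Ar = A @ r
        rAr_next = r @ Ar
        beta = rAr_next / rAr      # conjugacy coefficient
        p = r + beta * p
        Ap = Ar + beta * Ap        # update A*p without an extra product
        rAr = rAr_next
    return x
```

Note that the update `Ap = Ar + beta * Ap` is what keeps the cost at one operator-vector product per iteration; for an indefinite A the quantity `rAr` can vanish or change sign, which is precisely the situation the modifications discussed in the paper must handle.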

Keywords

  1. unconstrained optimization
  2. conjugate residual method
  3. inexact Newton method
  4. trust-region method
  5. conjugate gradient method

MSC codes

  1. 49M15
  2. 49M37
  3. 65F10
  4. 65F20
  5. 65K05
  6. 90C30

References

1. Å. Björck (1979), Use of conjugate gradients for solving linear least squares problems, in Conjugate Gradient Methods and Similar Techniques, I. S. Duff, ed., AERE, Harwell, UK, pp. 49--71.
2. Å. Björck and M. A. Saunders (2017), Krylov Subspace Algorithms for Overdetermined and Underdetermined Linear Systems, Technical report, Stanford University, Stanford, CA.
3. Å. Björck, T. Elfving, and Z. Strakoš (1998), Stability of conjugate gradient and Lanczos methods for linear least squares problems, SIAM J. Matrix Anal. Appl., 19, pp. 720--736, https://doi.org/10.1137/S089547989631202X.
4. A. R. Conn, N. I. M. Gould, and Ph. L. Toint (2000), Trust-Region Methods, MOS-SIAM Ser. Optim. 1, SIAM, Philadelphia, PA, https://doi.org/10.1137/1.9780898719857.
5. R. S. Dembo and T. Steihaug (1983), Truncated-Newton algorithms for large-scale unconstrained optimization, Math. Program., 26, pp. 190--212.
6. J. E. Dennis, Jr., and J. J. Moré (1977), Quasi-Newton methods, motivation and theory, SIAM Rev., 19, pp. 46--89.
7. D. C.-L. Fong (2011), Minimum-Residual Methods for Sparse Least-Squares Using Golub--Kahan Bidiagonalization, Ph.D. thesis, Stanford University, Stanford, CA.
8. D. C.-L. Fong and M. A. Saunders (2011), LSMR: An iterative algorithm for sparse least-squares problems, SIAM J. Sci. Comput., 33, pp. 2950--2971, https://doi.org/10.1137/10079687X.
9. D. C.-L. Fong and M. A. Saunders (2012), CG versus MINRES: An empirical comparison, SQU J. Sci., 17, pp. 44--62.
10. R. Fourer, C. Maheshwari, A. Neumaier, D. Orban, and H. Schichl (2010), Convexity and concavity detection in computational graphs, INFORMS J. Comput., 22, pp. 26--43, https://doi.org/10.1287/ijoc.1090.0321.
11. G. H. Golub and W. Kahan (1965), Calculating the singular values and pseudo-inverse of a matrix, SIAM J. Numer. Anal., 2, pp. 205--224, https://doi.org/10.1137/0702016.
12. N. I. M. Gould, D. Orban, and Ph. L. Toint (2015), CUTEst: A Constrained and Unconstrained Testing Environment with safe threads for mathematical optimization, Comput. Optim. Appl., 60, pp. 545--557, https://doi.org/10.1007/s10589-014-9687-3.
13. M. R. Hestenes and E. Stiefel (1952), Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand. (U.S.), 49, pp. 409--436.
14. T. Kloek (2012), Conjugate Gradients and Conjugate Residuals Type Methods for Solving Least Squares Problems from Tomography, Bachelor's thesis, TU Delft, Delft, Netherlands.
15. C. Lanczos (1950), An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Natl. Bur. Stand. (U.S.), 45, pp. 225--280.
16. K. Levenberg (1944), A method for the solution of certain problems in least squares, Quart. Appl. Math., 2, pp. 164--168.
17. C.-J. Lin and J. J. Moré (1998), Newton's method for large bound-constrained optimization problems, SIAM J. Optim., 9, pp. 1100--1127, https://doi.org/10.1137/S1052623498345075.
18. D. G. Luenberger (1970), The conjugate residual method for constrained minimization problems, SIAM J. Numer. Anal., 7, pp. 390--398.
19. L. Lukšan, C. Matonoha, and J. Vlček (2010), Modified CUTE Problems for Sparse Unconstrained Optimization, Technical report 1081, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague.
20. D. Marquardt (1963), An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Ind. Appl. Math., 11, pp. 431--441, https://doi.org/10.1137/0111030.
21. C. C. Paige and M. A. Saunders (1975), Solution of sparse indefinite systems of linear equations, SIAM J. Numer. Anal., 12, pp. 617--629, https://doi.org/10.1137/0712047.
22. C. C. Paige and M. A. Saunders (1982), LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Softw., 8, pp. 43--71.
23. T. Steihaug (1983), The conjugate gradient method and trust regions in large scale optimization, SIAM J. Numer. Anal., 20, pp. 626--637, https://doi.org/10.1137/0720042.
24. E. Stiefel (1955), Relaxationsmethoden bester Strategie zur Lösung linearer Gleichungssysteme, Comment. Math. Helv., 29, pp. 157--179.
25. Y. Yuan (2000), On the truncated conjugate gradient method, Math. Program., 87, pp. 561--573, https://doi.org/10.1007/s101070050012.

Published In

SIAM Journal on Optimization
Pages: 1988 - 2025
ISSN (online): 1095-7189

History

Submitted: 31 July 2018
Accepted: 4 April 2019
Published online: 25 July 2019

Funding Information

Natural Sciences and Engineering Research Council of Canada https://doi.org/10.13039/501100000038
