Abstract

We consider three mathematically equivalent variants of the conjugate gradient (CG) algorithm and how they perform in finite precision arithmetic. It was shown in [Greenbaum, Lin. Alg. Appl., 113 (1989), pp. 7--63] that if a finite precision CG computation satisfies certain conditions involving local orthogonality and approximate satisfaction of a recurrence formula, then its convergence is like that of exact CG applied to a matrix with many eigenvalues distributed throughout tiny intervals about the eigenvalues of the given matrix. Using a set of test problems, we determine to what extent each of these variants satisfies the desired conditions, and we show that there is significant correlation between how well these conditions are satisfied and how well the finite precision computation converges before reaching its ultimately attainable accuracy. We show that for problems where the width of the intervals containing the eigenvalues of the associated exact CG matrix significantly affects the behavior of exact CG, the different CG variants behave differently in finite precision arithmetic. For problems where the interval width makes little difference, or where the convergence of exact CG is essentially governed by the upper bound based on the square root of the condition number of the matrix, the different variants converge similarly in finite precision arithmetic until the ultimately attainable accuracy is achieved, although that level of accuracy may differ between variants. This points to the need for testing new CG variants on problems that are especially sensitive to rounding errors.
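To make the condition-number bound mentioned above concrete, the following minimal sketch (not taken from the paper; the matrix, eigenvalue distribution, and iteration count are illustrative choices) runs the plain Hestenes--Stiefel CG recurrence [16] in double precision on a diagonal SPD matrix and compares the A-norm of the error with the classical bound 2((sqrt(kappa)-1)/(sqrt(kappa)+1))^k, where kappa is the condition number of A.

```python
import numpy as np

def cg_error_history(A, b, maxit):
    """Hestenes-Stiefel CG from x0 = 0; returns the A-norm error at each step."""
    x_true = np.linalg.solve(A, b)          # reference solution for measuring error
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    errs = []
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)          # step length
        x = x + alpha * p
        r_new = r - alpha * Ap              # recursively updated residual
        beta = (r_new @ r_new) / (r @ r)    # direction update coefficient
        p = r_new + beta * p
        r = r_new
        e = x_true - x
        errs.append(np.sqrt(e @ A @ e))     # A-norm of the error
    return np.array(errs)

# Diagonal SPD test matrix with equally spaced eigenvalues in [1, 100],
# so kappa = 100 (an illustrative choice, not a test problem from the paper).
rng = np.random.default_rng(0)
n, maxit = 200, 40
eigs = np.linspace(1.0, 100.0, n)
A = np.diag(eigs)
b = rng.standard_normal(n)

errs = cg_error_history(A, b, maxit)

# Classical Chebyshev-based bound: ||e_k||_A <= 2 * rho^k * ||e_0||_A,
# with rho = (sqrt(kappa) - 1) / (sqrt(kappa) + 1).
kappa = eigs[-1] / eigs[0]
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
e0 = np.linalg.solve(A, b)                  # initial error, since x0 = 0
bound = 2 * rho ** np.arange(1, maxit + 1) * np.sqrt(e0 @ A @ e0)
```

For a well-conditioned problem like this one, the computed A-norm errors stay below the bound even in floating point; the point of the paper is that on problems more sensitive to rounding, mathematically equivalent CG variants can depart from such exact-arithmetic behavior in different ways.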

Keywords

  1. conjugate gradients
  2. finite precision
  3. parallel

MSC codes

  1. 65F10
  2. 65G50

References

1.
T. J. Ashby, P. Ghysels, W. Heirman, and W. Vanroose, The impact of global communication latency at extreme scales on Krylov methods, in Algorithms and Architectures for Parallel Processing, Y. Xiang, I. Stojmenovic, B. O. Apduhan, G. Wang, K. Nakano, and A. Zomaya, eds., Springer, Berlin, Heidelberg, 2012, pp. 428--442.
2.
E. K. Blum, Numerical Analysis and Computation: Theory and Practice, Addison-Wesley, Philippines, 1972.
3.
E. C. Carson, M. Rozložník, Z. Strakoš, P. Tichý, and M. Tůma, The numerical stability analysis of pipelined conjugate gradient methods: Historical context and methodology, SIAM J. Sci. Comput., 40 (2018), pp. A3549--A3580.
4.
A. T. Chronopoulos and C. W. Gear, $s$-step iterative methods for symmetric linear systems, J. Comput. Appl. Math., 25 (1989), pp. 153--168.
5.
A. T. Chronopoulos and C. W. Gear, On the efficient implementation of preconditioned $s$-step conjugate gradient methods on multiprocessors with memory hierarchy, Parallel Comput., 11 (1989), pp. 37--53.
6.
S. Cools, E. F. Yetkin, E. Agullo, L. Giraud, and W. Vanroose, Analyzing the effect of local rounding error propagation on the maximal attainable accuracy of the pipelined conjugate gradients method, SIAM J. Matrix Anal. Appl., 39 (2017), pp. 426--450.
7.
S. Cools and W. Vanroose, Numerically Stable Variants of the Communication-hiding Pipelined Conjugate Gradients Algorithm for the Parallel Solution of Large Scale Symmetric Linear Systems, preprint, arXiv:1706.05988v2, 2018.
8.
S. Cools, J. Cornelis, and W. Vanroose, Numerically stable recurrence relations for the communication hiding pipelined conjugate gradient method, IEEE Trans. Parallel Distrib. Syst., 30 (2019), pp. 2507--2522.
9.
V. Druskin, A. Greenbaum, and L. Knizhnerman, Using nonorthogonal Lanczos vectors in the computation of matrix functions, SIAM J. Sci. Comput., 19 (1998), pp. 38--54.
10.
I. Duff, R. Grimes, and J. Lewis, Users' Guide for the Harwell-Boeing Sparse Matrix Collection (release I), 1992.
11.
P. Ghysels and W. Vanroose, Hiding global synchronization latency in the preconditioned conjugate gradient algorithm, Parallel Comput., 40 (2014), pp. 224--238.
12.
A. Greenbaum, Comparison of splittings used with the conjugate gradient algorithm, Numer. Math., 33 (1979), pp. 181--194.
13.
A. Greenbaum, Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences, Lin. Alg. Appl., 113 (1989), pp. 7--63.
14.
A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997.
15.
A. Greenbaum and Z. Strakoš, Predicting the behavior of finite precision Lanczos and conjugate gradient computations, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 121--137.
16.
M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952), pp. 409--436.
17.
N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, PA, 1996.
18.
G. Meurant, Multitasking the conjugate gradient method on the CRAY X-MP/48, Parallel Comput., 5 (1987), pp. 267--280.
19.
G. Meurant, On prescribing the convergence behavior of the conjugate gradient algorithm, Numer. Algorithms, 84 (2020), pp. 1353--1380.
20.
C. C. Paige, Accuracy and effectiveness of the Lanczos algorithm for the symmetric eigenproblem, Lin. Alg. Appl., 33 (1980), pp. 235--258.
21.
C. C. Paige, The Computation of Eigenvalues and Eigenvectors of Very Large Sparse Matrices, Ph.D. dissertation, Univ. of London, 1971.
22.
C. C. Paige, An augmented stability result for the Lanczos Hermitian matrix tridiagonalization process, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2347--2359.
23.
C. C. Paige, Accuracy of the Lanczos process for the eigenproblem and solution of equations, SIAM J. Matrix Anal. Appl., 40 (2019), pp. 1371--1398.
24.
B. Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, NJ, 1980.
25.
J. van Rosendale, Minimizing inner product data dependencies in conjugate gradient iteration, in Proc. International Conference on Parallel Processing (ICPP), 1983.
26.
Y. Saad, Practical use of polynomial preconditionings for the conjugate gradient method, SIAM J. Sci. Stat. Comput., 6 (1985), pp. 865--881.
27.
Y. Saad, Krylov subspace methods on supercomputers, SIAM J. Sci. Stat. Comput., 10 (1989), pp. 1200--1232.
28.
Z. Strakoš and P. Tichý, On error estimation in the conjugate gradient method and why it works in finite precision computations, Electron. Trans. Numer. Anal., 13 (2002), pp. 56--80.

Published In

SIAM Journal on Scientific Computing
Pages: S496 - S515
ISSN (online): 1095-7197

History

Submitted: 17 June 2020
Accepted: 31 March 2021
Published online: 15 July 2021

Funding Information

National Science Foundation https://doi.org/10.13039/100000001 : DMS-1210886
