Abstract
I recommend this method to you for imitation. You will hardly ever again eliminate directly, at least not when you have more than 2 unknowns. The indirect [iterative] procedure can be done while half asleep, or while thinking about other things.
— CARL FRIEDRICH GAUSS, Letter to C. L. Gerling (1823)
The iterative method is commonly called the “Seidel process,” or the “Gauss-Seidel process.” But, as Ostrowski (1952) points out, Seidel (1874) mentions the process but advocates not using it. Gauss nowhere mentions it.
— GEORGE E. FORSYTHE, Solving Linear Algebraic Equations Can Be Interesting (1953)
The spurious contributions in null(A) grow at worst linearly and if the rounding errors are small the scheme can be quite effective.
— HERBERT B. KELLER, On the Solution of Singular and Semidefinite Linear Systems by Iteration (1965)
Iterative methods for solving linear systems have a long history, going back at least to Gauss. Table 17.1 shows the dates of publication of selected methods. It is perhaps surprising, then, that rounding error analysis for iterative methods is not well developed. There are two main reasons for the paucity of error analysis. One is that in many applications accuracy requirements are modest and are satisfied without difficulty, resulting in little demand for error analysis. Certainly there is no point in computing an answer to greater accuracy than that determined by the data, and in scientific and engineering applications the data often has only a few correct digits. The second reason is that rounding error analysis for iterative methods is inherently more difficult than for direct methods, and the bounds that are obtained are harder to interpret.
In this chapter we consider a simple but important class of iterative methods, stationary iterative methods, for which a reasonably comprehensive error analysis can be given. The basic question that our analysis attempts to answer is, “What is the limiting accuracy of a method in floating point arithmetic?” Specifically, “How small can we guarantee that the backward or forward error will be over all iterations k = 1, 2, …?” Without an answer to this question we cannot be sure that a convergence test of the form ‖b − Ax̂_k‖ ≤ ε (say) will ever be satisfied, for any given value of ε < ‖b − Ax₀‖!
As an indication of the potentially devastating effects of rounding errors we present an example constructed and discussed by Hammarling and Wilkinson [541, 1976]. Here, A is the 100 × 100 lower bidiagonal matrix with a_ii = 1.5 and a_{i,i−1} ≡ 1, and b_i ≡ 2.5. The successive overrelaxation (SOR) method is applied in MATLAB with parameter ω = 1.5, starting with the rounded version of the exact solution x, given by x_i = 1 − (−2/3)^i. The forward errors ‖x̂_k − x‖∞/‖x‖∞ and the ∞-norm backward errors η_{A,b}(x̂_k) are plotted in Figure 17.1. The SOR method converges in exact arithmetic, since the iteration matrix has spectral radius 1/2, but in the presence of rounding errors it diverges. The iterate x̂_238 has a largest element of order 10^13, x̂_{k+2} ≡ x̂_k for k ≥ 238, and for k > 100, x̂_k(60:100) ≈ (−1)^k x̂_100(60:100). The divergence is not a result of ill conditioning of A, since κ∞(A) ≈ 5. The reason for the initial rapid growth of the errors in this example is that the iteration matrix is far from normal; this allows the norms of its powers to become very large before they ultimately decay by a factor ≈ 1/2 with each successive power.
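The experiment is straightforward to reproduce. The sketch below is in Python/NumPy rather than the MATLAB used in the text; the matrix, right-hand side, parameter ω = 1.5, and starting vector follow the description above, while the cap of 300 iterations is an arbitrary choice made here (enough to pass the k = 238 mark). Since A is lower bidiagonal with unit subdiagonal and no superdiagonal, one SOR sweep reduces to x_j ← (1 − ω)x_j + (ω/1.5)(b_j − x_{j−1}), using the already-updated x_{j−1}.

```python
import numpy as np

n = 100
omega = 1.5

# Lower bidiagonal A with a_ii = 1.5 and a_{i,i-1} = 1; b_i = 2.5.
A = 1.5 * np.eye(n) + np.diag(np.ones(n - 1), k=-1)
b = np.full(n, 2.5)

# Exact solution x_i = 1 - (-2/3)^i (1-based index i).
i = np.arange(1, n + 1)
x_exact = 1.0 - (-2.0 / 3.0) ** i

# Start from the (rounded) exact solution, so in exact arithmetic
# every iterate would equal x_exact; any growth is due to rounding.
x = x_exact.copy()
ferr = []  # forward errors ||x_k - x||_inf / ||x||_inf
for k in range(300):
    # One SOR sweep for this lower bidiagonal system.
    x[0] = (1 - omega) * x[0] + (omega / 1.5) * b[0]
    for j in range(1, n):
        x[j] = (1 - omega) * x[j] + (omega / 1.5) * (b[j] - x[j - 1])
    ferr.append(np.linalg.norm(x - x_exact, np.inf)
                / np.linalg.norm(x_exact, np.inf))

print(np.linalg.cond(A, np.inf))  # small (about 5): A is well conditioned
print(max(ferr))                  # enormous: rounding errors are amplified
```

In double precision the forward error starts at roundoff level and then grows explosively, despite the spectral radius of the iteration matrix being 1/2 and κ∞(A) being small, illustrating the nonnormality effect described above.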