Numerical precision is the very soul of science.
— SIR D'ARCY WENTWORTH THOMPSON, On Growth and Form (1942)
There will always be a small but steady demand for error-analysts to … expose bad algorithms' big errors and, more important, supplant bad algorithms with provably good ones.
— WILLIAM M. KAHAN, Interval Arithmetic Options in the Proposed IEEE Floating Point Arithmetic Standard (1980)
Since none of the numbers which we take out from logarithmic and trigonometric tables admit of absolute precision, but are all to a certain extent approximate only, the results of all calculations performed by the aid of these numbers can only be approximately true … It may happen, that in special cases the effect of the errors of the tables is so augmented that we may be obliged to reject a method, otherwise the best, and substitute another in its place.
— CARL FRIEDRICH GAUSS, Theoria Motus (1809)
Backward error analysis is no panacea; it may explain errors but not excuse them.
— HEWLETT-PACKARD, HP-15C Advanced Functions Handbook (1982)
This book is concerned with the effects of finite precision arithmetic on numerical algorithms, particularly those in numerical linear algebra. Central to any understanding of high-level algorithms is an appreciation of the basic concepts of finite precision arithmetic. This opening chapter briskly imparts the necessary background material. Various examples are used for illustration, some of them familiar (such as the quadratic equation) but several less well known. Common misconceptions and myths exposed during the chapter are highlighted towards the end, in §1.19.
This chapter has few prerequisites and makes few assumptions about the nature of the finite precision arithmetic (for example, the base, the number of digits, the mode of rounding, or even whether it is floating point arithmetic). The second chapter deals in detail with the specifics of floating point arithmetic.
A word of warning: some of the examples from §1.12 onward are special ones chosen to illustrate particular phenomena. You may never see in practice the extremes of behaviour shown here. Let the examples show you what can happen, but do not let them destroy your confidence in finite precision arithmetic!
1.1. Notation and Background
We describe the notation used in the book and briefly set up definitions needed for this chapter.
Generally, we use
capital letters A, B, C, Δ, Λ for matrices,
subscripted lower case letters a_{ij}, b_{ij}, c_{ij}, δ_{ij}, λ_{ij} for matrix elements,
lower case letters x, y, z, c, g, h for vectors,
lower case Greek letters α, β, γ, θ, π for scalars,
following the widely used convention originally introduced by Householder [644, 1964].
The vector space of all real m × n matrices is denoted by ℝ^{m×n} and the vector space of real n-vectors by ℝ^n. Similarly, ℂ^{m×n} denotes the vector space of complex m × n matrices. A superscript “T” denotes transpose and a superscript “*” conjugate transpose.
Algorithms are expressed using a pseudocode based on the MATLAB language [576, 2000], [824]. Comments begin with the % symbol.
Submatrices are specified with the colon notation, as used in MATLAB and Fortran 90/95: A(p: q, r: s) denotes the submatrix of A formed by the intersection of rows p to q and columns r to s. As a special case, a lone colon as the row or column specifier means to take all entries in that row or column; thus A(:, j) is the jth column of A, and A(i, :) the ith row. The values taken by an integer variable are also described using the colon notation: “i = 1: n” means the same as “i = 1, 2, …, n”.
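For instance, the following MATLAB fragment (an illustrative sketch of our own; the choice A = magic(5) is an assumption, not from the text) exercises the colon notation:

    A = magic(5);       % an example 5-by-5 matrix
    B = A(2:4, 1:3);    % submatrix: rows 2 to 4, columns 1 to 3
    c = A(:, 2);        % the 2nd column of A
    r = A(3, :);        % the 3rd row of A
    for i = 1:5         % "i = 1:5" means i = 1, 2, ..., 5
        fprintf('%d ', A(i, i))   % the diagonal entries, one per iteration
    end
    fprintf('\n')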
Evaluation of an expression in floating point arithmetic is denoted fl(·), and we assume that the basic arithmetic operations op = +, −, *, / satisfy

    fl(x op y) = (x op y)(1 + δ),   |δ| ≤ u.        (1.1)

Here, u is the unit roundoff (or machine precision) of the arithmetic.
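As a quick empirical check of model (1.1), the following MATLAB sketch (our own illustration, not from the text) estimates δ for each basic operation by carrying it out in IEEE single precision and comparing with a double precision reference; for single precision the unit roundoff is u = 2^(−24).

    % Estimate the relative error delta in fl(x op y) for single precision.
    xs = single(pi); ys = single(exp(1));   % operands already rounded to single
    u = eps('single')/2;                    % unit roundoff, 2^(-24)
    ops = {@plus, @minus, @times, @rdivide};
    names = {'+', '-', '*', '/'};
    for k = 1:numel(ops)
        op = ops{k};
        reference = op(double(xs), double(ys));  % (near-)exact result
        computed  = op(xs, ys);                  % fl(x op y) in single
        delta = abs(double(computed) - reference)/abs(reference);
        fprintf('x %s y: |delta| = %9.2e  <=  u = %9.2e\n', names{k}, delta, u)
    end

Each observed |δ| should come out below u, consistent with (1.1); the double precision reference is itself only nearly exact, but its error is negligible at the single precision scale.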