Iterative refinement is an iterative method proposed by James H. Wilkinson to improve the accuracy of numerical solutions to systems of linear equations.
When solving a linear system A x = b, due to the compounded accumulation of rounding errors, the computed solution x̂ may sometimes deviate from the exact solution x*. Starting with x_1 = x̂, iterative refinement computes a sequence {x_1, x_2, x_3, …} which converges to x* when certain assumptions are met.
Description
For m = 1, 2, …, the m-th iteration of iterative refinement consists of three steps:
: (i) Compute the residual r_m = b − A x_m.
: (ii) Solve the correction equation A d_m = r_m for d_m.
: (iii) Add the correction to obtain the next iterate x_{m+1} = x_m + d_m.
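The three steps can be sketched as follows (a minimal illustration in Python with NumPy; the function name, the stopping tolerance, and the `solve` callback are illustrative choices, not part of any standard library API):

```python
import numpy as np

def iterative_refinement(A, b, solve, tol=1e-14, max_iter=10):
    """Refine an approximate solution of A x = b.

    `solve` is any (possibly inaccurate) solver for systems with matrix A,
    e.g. one backed by a precomputed factorization of A.
    """
    x = solve(b)                      # initial (possibly inaccurate) solution
    for _ in range(max_iter):
        r = b - A @ x                 # step (i): residual
        d = solve(r)                  # step (ii): solve A d = r
        x = x + d                     # step (iii): correct the estimate
        if np.linalg.norm(d) <= tol * np.linalg.norm(x):
            break                     # correction too small to matter
    return x

# Usage sketch: refine a solution produced by NumPy's generic solver.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = iterative_refinement(A, b, solve=lambda rhs: np.linalg.solve(A, rhs))
```

The termination test mirrors the stopping behaviour described below: the loop ends as soon as the correction d_m is negligible relative to the current iterate.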
The crucial reasoning for the refinement algorithm is that although the solution for d_m in step (ii) may indeed be troubled by similar errors as the first solution x̂, the calculation of the residual r_m in step (i) is, in comparison, numerically nearly exact: you may not know the right answer very well, but you know quite accurately just how far the solution you have in hand is from producing the correct outcome (b). If the residual r_m is small in some sense, then the correction d_m must also be small, and should at the very least steer the current estimate x_m closer to the desired solution x*.
The iterations will stop on their own when the residual r_m is zero, or close enough to zero that the corresponding correction d_m is too small to change the solution x_m which produced it; alternatively, the algorithm stops when r_m is too small to convince the linear algebraist monitoring the progress that it is worth continuing with any further refinements.
Note that the matrix equation solved in step (ii) uses the same matrix A for each iteration. If the matrix equation is solved using a direct method, such as Cholesky or LU decomposition, the numerically expensive factorization of A is done once and is reused for the relatively inexpensive forward and back substitution to solve for d_m at each iteration.
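This reuse of the factorization can be sketched with SciPy's `lu_factor`/`lu_solve` pair (a small illustration; the matrix and iteration count are arbitrary examples):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Expensive O(n^3) factorization of A, done exactly once.
lu, piv = lu_factor(A)

x = lu_solve((lu, piv), b)      # initial solution
for _ in range(3):
    r = b - A @ x               # step (i): residual
    d = lu_solve((lu, piv), r)  # step (ii): cheap O(n^2) triangular solves
    x += d                      # step (iii): correction
```

Each refinement step costs only the two triangular substitutions hidden inside `lu_solve`, not a new factorization.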
Error analysis
As a rule of thumb, iterative refinement for Gaussian elimination produces a solution correct to working precision if double the working precision is used in the computation of the residual r, e.g. by using quad or double extended precision IEEE 754 floating point, and if A is not too ill-conditioned (and the iteration and the rate of convergence are determined by the condition number κ(A) of A).
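This rule of thumb can be demonstrated numerically (a sketch, not a rigorous experiment: the test matrix, its size, and the iteration count are arbitrary choices). Here the factorization and correction solves run in single precision, while the residual is accumulated in double precision, and the refined solution nevertheless reaches roughly double-precision accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Diagonally dominant, hence well-conditioned, test matrix in double precision.
A64 = rng.standard_normal((n, n)) + n * np.eye(n)
x_true = rng.standard_normal(n)
b64 = A64 @ x_true

# Working precision: single. The solves happen entirely in float32.
A32 = A64.astype(np.float32)
x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b64 - A64 @ x                               # residual in double precision
    d = np.linalg.solve(A32, r.astype(np.float32))  # correction in single precision
    x += d.astype(np.float64)
```

Without the refinement loop, the error of `x` is limited by single-precision round-off; with it, the error shrinks toward double-precision level, as the rule of thumb predicts.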
More formally, assuming that each step (ii) can be solved reasonably accurately, i.e., in mathematical terms, that for every m we have
: A (I + F_m) d_m = r_m
where ‖F_m‖_∞ < σ, the relative error in the m-th iterate of iterative refinement satisfies
: ‖x_m − x*‖_∞ / ‖x*‖_∞ ≤ (σ κ(A) ε₁)^m + μ₁ ε₁ + n κ(A) μ₂ ε₂
where
* ‖·‖_∞ denotes the ∞-norm of a vector,
* κ(A) is the ∞-condition number of A,
* n is the order of A,
* ε₁ and ε₂ are unit round-offs of floating-point arithmetic operations,
* σ, μ₁ and μ₂ are constants that depend on A, ε₁ and ε₂,
if A is "not too badly conditioned", which in this context means
: 0 < σ κ(A) ε₁ ≪ 1
and implies that μ₁ and μ₂ are of order unity.
The distinction between ε₁ and ε₂ is intended to allow mixed-precision evaluation of r_m, where intermediate results are computed with unit round-off ε₂ before the final result is rounded (or truncated) with unit round-off ε₁. All other computations are assumed to be carried out with unit round-off ε₁.
References
{{Numerical linear algebra}}