The conjugate residual method is an iterative numeric method used for solving systems of linear equations. It is a Krylov subspace method very similar to the much more popular conjugate gradient method, with similar construction and convergence properties.

This method is used to solve linear equations of the form

:\mathbf A \mathbf x = \mathbf b

where \mathbf A is an invertible and Hermitian matrix, and \mathbf b is nonzero.

The conjugate residual method differs from the closely related conjugate gradient method primarily in that it involves more numerical operations and requires more storage, but the system matrix is only required to be Hermitian, not Hermitian positive definite.

Given an (arbitrary) initial estimate of the solution \mathbf x_0, the method is outlined below:

:\begin{align}
& \mathbf x_0 := \text{Some initial guess} \\
& \mathbf r_0 := \mathbf b - \mathbf A \mathbf x_0 \\
& \mathbf p_0 := \mathbf r_0 \\
& \text{Iterate, with } k \text{ starting at } 0: \\
& \qquad \alpha_k := \frac{\mathbf r_k^\mathrm T \mathbf A \mathbf r_k}{(\mathbf A \mathbf p_k)^\mathrm T (\mathbf A \mathbf p_k)} \\
& \qquad \mathbf x_{k+1} := \mathbf x_k + \alpha_k \mathbf p_k \\
& \qquad \mathbf r_{k+1} := \mathbf r_k - \alpha_k \mathbf A \mathbf p_k \\
& \qquad \beta_k := \frac{\mathbf r_{k+1}^\mathrm T \mathbf A \mathbf r_{k+1}}{\mathbf r_k^\mathrm T \mathbf A \mathbf r_k} \\
& \qquad \mathbf p_{k+1} := \mathbf r_{k+1} + \beta_k \mathbf p_k \\
& \qquad \mathbf A \mathbf p_{k+1} := \mathbf A \mathbf r_{k+1} + \beta_k \mathbf A \mathbf p_k \\
& \qquad k := k + 1
\end{align}

The iteration may be stopped once \mathbf x_k has been deemed converged. The only difference between this and the conjugate gradient method is the calculation of \alpha_k and \beta_k (plus the optional incremental calculation of \mathbf A \mathbf p_k at the end).

Note: the above algorithm can be transformed so as to make only one symmetric matrix-vector multiplication in each iteration.
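The recurrences above map directly onto code. Below is a minimal sketch in Python with NumPy for the real symmetric case; the function name conjugate_residual, the residual-norm stopping test with tolerance tol, and the iteration cap max_iter are illustrative choices rather than part of the method. In line with the note above, it performs only one matrix-vector product with \mathbf A per iteration by updating \mathbf A \mathbf p_k incrementally.

import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-8, max_iter=1000):
    # Solve A x = b for real symmetric (possibly indefinite) A.
    # Sketch only: argument names and the stopping rule are illustrative.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                     # residual r_0
    p = r.copy()                      # search direction p_0
    Ar = A @ r
    Ap = Ar.copy()                    # A p_0 = A r_0
    rAr = r @ Ar                      # r_k^T A r_k
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)       # alpha_k
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:   # convergence check
            break
        Ar = A @ r                    # the single matrix-vector product per iteration
        rAr_new = r @ Ar
        beta = rAr_new / rAr          # beta_k
        rAr = rAr_new
        p = r + beta * p              # p_{k+1}
        Ap = Ar + beta * Ap           # A p_{k+1}, updated without another product
    return x

As a usage illustration, this sketch can be applied to a small symmetric indefinite system such as A = np.array([[4., 1.], [1., -3.]]) with b = np.array([1., 2.]), a case where the plain conjugate gradient method is not guaranteed to converge because A is not positive definite.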


Preconditioning

By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as is done for the conjugate gradient method:

:\begin{align}
& \mathbf x_0 := \text{Some initial guess} \\
& \mathbf r_0 := \mathbf M^{-1}(\mathbf b - \mathbf A \mathbf x_0) \\
& \mathbf p_0 := \mathbf r_0 \\
& \text{Iterate, with } k \text{ starting at } 0: \\
& \qquad \alpha_k := \frac{\mathbf r_k^\mathrm T \mathbf A \mathbf r_k}{(\mathbf A \mathbf p_k)^\mathrm T \mathbf M^{-1} (\mathbf A \mathbf p_k)} \\
& \qquad \mathbf x_{k+1} := \mathbf x_k + \alpha_k \mathbf p_k \\
& \qquad \mathbf r_{k+1} := \mathbf r_k - \alpha_k \mathbf M^{-1} \mathbf A \mathbf p_k \\
& \qquad \beta_k := \frac{\mathbf r_{k+1}^\mathrm T \mathbf A \mathbf r_{k+1}}{\mathbf r_k^\mathrm T \mathbf A \mathbf r_k} \\
& \qquad \mathbf p_{k+1} := \mathbf r_{k+1} + \beta_k \mathbf p_k \\
& \qquad \mathbf A \mathbf p_{k+1} := \mathbf A \mathbf r_{k+1} + \beta_k \mathbf A \mathbf p_k \\
& \qquad k := k + 1
\end{align}

The preconditioner \mathbf M^{-1} must be symmetric positive definite. Note that the residual vector here is different from the residual vector without preconditioning.
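A sketch of the preconditioned variant, under the same caveats as before, could look as follows; here M_inv is assumed to be a callable that applies \mathbf M^{-1} to a vector, a representation chosen for this illustration rather than anything prescribed by the method.

import numpy as np

def preconditioned_conjugate_residual(A, b, M_inv, x0=None, tol=1e-8, max_iter=1000):
    # Preconditioned conjugate residual sketch; M_inv(v) should apply a
    # symmetric positive definite preconditioner M^{-1} to the vector v.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = M_inv(b - A @ x)              # preconditioned residual r_0
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar                      # r_k^T A r_k
    for _ in range(max_iter):
        MAp = M_inv(Ap)               # M^{-1} A p_k, used twice below
        alpha = rAr / (Ap @ MAp)      # alpha_k
        x = x + alpha * p
        r = r - alpha * MAp
        if np.linalg.norm(r) < tol:   # note: r is the preconditioned residual
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr          # beta_k
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap           # A p_{k+1} updated incrementally
    return x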


References

* Yousef Saad, ''Iterative methods for sparse linear systems'' (2nd ed.), page 194, SIAM. ISBN 978-0-89871-534-7.