Successive Over-relaxation

In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process.

It was devised simultaneously by David M. Young Jr. and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel. An example is the method of Lewis Fry Richardson, and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, requiring some expertise to ensure convergence to the solution, which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young Jr.


Formulation

Given a square system of ''n'' linear equations with unknown '''x''':

:A\mathbf{x} = \mathbf{b}

where:

:A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}.

Then ''A'' can be decomposed into a diagonal component ''D'', and strictly lower and upper triangular components ''L'' and ''U'':

:A = D + L + U,

where

:D = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}, \quad L = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ a_{21} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{pmatrix}, \quad U = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}.

The system of linear equations may be rewritten as:

:(D + \omega L)\mathbf{x} = \omega\mathbf{b} - [\omega U + (\omega - 1)D]\mathbf{x}

for a constant ''ω'' > 1, called the ''relaxation factor''.

The method of successive over-relaxation is an iterative technique that solves the left hand side of this expression for '''x''', using the previous value for '''x''' on the right hand side. Analytically, this may be written as:

:\mathbf{x}^{(k+1)} = (D + \omega L)^{-1}\big(\omega\mathbf{b} - [\omega U + (\omega - 1)D]\mathbf{x}^{(k)}\big) = L_\omega\mathbf{x}^{(k)} + \mathbf{c},

where \mathbf{x}^{(k)} is the ''k''th approximation or iteration of \mathbf{x} and \mathbf{x}^{(k+1)} is the next or ''k'' + 1 iteration of \mathbf{x}. However, by taking advantage of the triangular form of (''D'' + ''ωL''), the elements of \mathbf{x}^{(k+1)} can be computed sequentially using forward substitution:

:x^{(k+1)}_i = (1 - \omega)x^{(k)}_i + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j<i} a_{ij}x^{(k+1)}_j - \sum_{j>i} a_{ij}x^{(k)}_j\right), \quad i = 1, 2, \ldots, n.
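
A minimal NumPy sketch of the matrix form above (the small symmetric test matrix, right-hand side, and ω are arbitrary choices for illustration): it assembles L_\omega = (D + \omega L)^{-1}((1 - \omega)D - \omega U) and \mathbf{c} = \omega(D + \omega L)^{-1}\mathbf{b} explicitly and iterates \mathbf{x} \leftarrow L_\omega\mathbf{x} + \mathbf{c}. In practice one uses the forward-substitution form instead of forming an inverse.

 import numpy as np

 # One SOR sweep in matrix form: x <- L_w x + c, with
 # L_w = (D + wL)^{-1}((1 - w)D - wU) and c = w (D + wL)^{-1} b.
 A = np.array([[4.0, -1.0],
               [-1.0, 5.0]])      # assumed symmetric positive-definite test matrix
 b = np.array([1.0, 2.0])
 w = 1.1                          # relaxation factor

 D = np.diag(np.diag(A))
 L = np.tril(A, -1)               # strictly lower triangular part
 U = np.triu(A, 1)                # strictly upper triangular part

 M_inv = np.linalg.inv(D + w * L)
 L_w = M_inv @ ((1 - w) * D - w * U)
 c = w * M_inv @ b

 x = np.zeros_like(b)
 for _ in range(50):
     x = L_w @ x + c
 print(x, np.linalg.solve(A, b))  # both approximate the true solution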


Convergence

The choice of relaxation factor ''ω'' is not necessarily easy, and depends upon the properties of the coefficient matrix. In 1947, Ostrowski proved that if A is symmetric and positive-definite then \rho(L_\omega) < 1 for 0 < \omega < 2. Thus, convergence of the iteration process follows, but we are generally interested in faster convergence rather than just convergence.
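
Ostrowski's result can be illustrated numerically (a sketch, not a proof; the random symmetric positive-definite test matrix is an arbitrary assumption): the spectral radius of the SOR iteration matrix stays below 1 for every ''ω'' sampled from (0, 2).

 import numpy as np

 # Sketch: for symmetric positive-definite A, rho(L_omega) < 1 on (0, 2).
 rng = np.random.default_rng(0)
 B = rng.standard_normal((5, 5))
 A = B @ B.T + 5 * np.eye(5)       # assumed SPD test matrix

 D = np.diag(np.diag(A))
 L = np.tril(A, -1)
 U = np.triu(A, 1)

 for omega in (0.2, 0.5, 1.0, 1.5, 1.9):
     L_w = np.linalg.inv(D + omega * L) @ ((1 - omega) * D - omega * U)
     rho = max(abs(np.linalg.eigvals(L_w)))
     print("omega = {:.1f}, rho(L_omega) = {:.4f}".format(omega, rho))  # < 1 each time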


Convergence Rate

The convergence rate for the SOR method can be analytically derived. One needs to assume the following:
* the relaxation parameter is appropriate: \omega \in (0, 2)
* Jacobi's iteration matrix C_\text{Jac} := I - D^{-1}A has only real eigenvalues
* Jacobi's method is convergent: \mu := \rho(C_\text{Jac}) < 1
* the matrix decomposition A = D + L + U satisfies the property that \operatorname{spec}(\lambda D + zL + \tfrac{1}{z}U) = \operatorname{spec}(\lambda D + L + U) for any z \in \mathbb{C} \setminus \{0\} and \lambda \in \mathbb{C}.

Then the convergence rate can be expressed as

:\rho(C_\omega) = \begin{cases} \frac{1}{4}\left(\omega\mu + \sqrt{\omega^2\mu^2 - 4(\omega - 1)}\right)^2, & 0 < \omega \leq \omega_\text{opt} \\ \omega - 1, & \omega_\text{opt} < \omega < 2 \end{cases}

where the optimal relaxation parameter is given by

:\omega_\text{opt} := 1 + \left(\frac{\mu}{1 + \sqrt{1 - \mu^2}}\right)^2.

In particular, for \omega = 1 (Gauss–Seidel) it holds that \rho(C_\omega) = \mu^2 = \rho(C_\text{Jac})^2. The last assumption is satisfied for tridiagonal matrices since Z(\lambda D + L + U)Z^{-1} = \lambda D + zL + \tfrac{1}{z}U for diagonal Z with entries Z_{ii} = z^{i-1} and \operatorname{spec}(\lambda D + L + U) = \operatorname{spec}(Z(\lambda D + L + U)Z^{-1}).
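
These formulas can be checked numerically, as in the sketch below, on the tridiagonal matrix tridiag(−1, 2, −1) (chosen because, per the remark above, tridiagonal matrices satisfy the last assumption, and its Jacobi matrix has real eigenvalues with μ < 1); the size ''n'' = 20 is an arbitrary choice. It computes \omega_\text{opt} from μ and compares \omega_\text{opt} - 1 with the measured spectral radius.

 import numpy as np

 # Sketch: at omega_opt, the measured rho(C_omega) should be
 # approximately omega_opt - 1 for tridiag(-1, 2, -1).
 n = 20
 A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

 D = np.diag(np.diag(A))
 L = np.tril(A, -1)
 U = np.triu(A, 1)

 C_jac = np.eye(n) - np.linalg.inv(D) @ A
 mu = max(abs(np.linalg.eigvals(C_jac)))            # Jacobi spectral radius
 omega_opt = 1 + (mu / (1 + np.sqrt(1 - mu**2)))**2

 C_w = (np.linalg.inv(D + omega_opt * L)
        @ ((1 - omega_opt) * D - omega_opt * U))
 rho = max(abs(np.linalg.eigvals(C_w)))
 print(omega_opt - 1, rho)                          # approximately equal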


Algorithm

Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed, and vector indexing is omitted. The algorithm goes as follows:

 Inputs: A, b, ω
 Output: φ
 
 Choose an initial guess φ to the solution
 repeat until convergence
     for i from 1 until n do
         set σ to 0
         for j from 1 until n do
             if j ≠ i then
                 set σ to σ + a_ij φ_j
             end if
         end (j-loop)
         set φ_i to (1 − ω)φ_i + ω(b_i − σ)/a_ii
     end (i-loop)
     check if convergence is reached
 end (repeat)

;Note: (1 - \omega)\phi_i + \frac{\omega}{a_{ii}}(b_i - \sigma) can also be written \phi_i + \omega\left(\frac{b_i - \sigma}{a_{ii}} - \phi_i\right), thus saving one multiplication in each iteration of the outer ''for''-loop.


Example

We are presented the linear system

:\begin{align} 4x_1 - x_2 - 6x_3 + 0x_4 &= 2, \\ -5x_1 - 4x_2 + 10x_3 + 8x_4 &= 21, \\ 0x_1 + 9x_2 + 4x_3 - 2x_4 &= -12, \\ 1x_1 + 0x_2 - 7x_3 + 5x_4 &= -6. \end{align}

To solve the equations, we choose a relaxation factor \omega = 0.5 and an initial guess vector \phi = (0, 0, 0, 0). According to the successive over-relaxation algorithm, a sequence of approximations is obtained which ideally, but not necessarily, finds the exact solution, (3, −2, 2, 1), in 38 steps.

A simple implementation of the algorithm in Common Lisp is offered below.

 ;; Set the default floating-point format to "long-float" in order to
 ;; ensure correct operation on a wider range of numbers.
 (setf *read-default-float-format* 'long-float)
 
 (defparameter +MAXIMUM-NUMBER-OF-ITERATIONS+ 100
   "The number of iterations beyond which the algorithm should cease its
    operation, regardless of its current solution. A higher number of
    iterations might provide a more accurate result, but imposes higher
    performance requirements.")
 
 (declaim (type (integer 0 *) +MAXIMUM-NUMBER-OF-ITERATIONS+))
 
 (defun get-errors (computed-solution exact-solution)
   "For each component of the COMPUTED-SOLUTION vector, retrieves its
    error with respect to the expected EXACT-SOLUTION vector, returning a
    vector of error values.
    ---
    While both input vectors should be equal in size, this condition is
    not checked and the shortest of the twain determines the output
    vector's number of elements.
    ---
    The established formula is the following:
      Let resultVectorSize = min(computedSolution.length, exactSolution.length)
      Let resultVector     = new vector of resultVectorSize
      For i from 0 to (resultVectorSize - 1)
        resultVector[i] = exactSolution[i] - computedSolution[i]
      Return resultVector"
   (declare (type (vector number *) computed-solution))
   (declare (type (vector number *) exact-solution))
   (map '(vector number *) #'- exact-solution computed-solution))
 
 (defun is-convergent (errors &key (error-tolerance 0.001))
   "Checks whether the convergence is reached with respect to the
    ERRORS vector which registers the discrepancy betwixt the computed
    and the exact solution vector.
    ---
    The convergence is fulfilled if and only if each absolute error
    component is less than or equal to the ERROR-TOLERANCE, that is:
    For all e in ERRORS, it holds: abs(e) <= errorTolerance."
   (declare (type (vector number *) errors))
   (declare (type number            error-tolerance))
   (flet ((error-is-acceptable (error)
           (declare (type number error))
           (<= (abs error) error-tolerance)))
     (every #'error-is-acceptable errors)))
 
 (defun make-zero-vector (size)
   "Creates and returns a vector of the SIZE with all elements set to 0."
   (declare (type (integer 0 *) size))
   (make-array size :initial-element 0.0 :element-type 'number))
 
 (defun successive-over-relaxation (A b omega
                                    &key (phi (make-zero-vector (length b)))
                                         (convergence-check
                                           #'(lambda (iteration phi)
                                               (declare (ignore phi))
                                               (>= iteration +MAXIMUM-NUMBER-OF-ITERATIONS+))))
   "Implements the successive over-relaxation (SOR) method, applied upon
    the linear equations defined by the matrix A and the right-hand side
    vector B, employing the relaxation factor OMEGA, returning the
    calculated solution vector.
    ---
    The first algorithm step, the choice of an initial guess PHI, is
    represented by the optional keyword parameter PHI, which defaults
    to a zero-vector of the same structure as B. If supplied, this
    vector will be destructively modified. In any case, the PHI vector
    constitutes the function's result value.
    ---
    The terminating condition is implemented by the CONVERGENCE-CHECK,
    an optional predicate
      lambda(iteration phi) => generalized-boolean
    which returns T, signifying the immediate termination, upon achieving
    convergence, or NIL, signaling continuant operation, otherwise. In
    its default configuration, the CONVERGENCE-CHECK simply abides the
    iteration's ascension to the ``+MAXIMUM-NUMBER-OF-ITERATIONS+'',
    ignoring the achieved accuracy of the vector PHI."
   (declare (type (array  number (* *)) A))
   (declare (type (vector number *)     b))
   (declare (type number                omega))
   (declare (type (vector number *)     phi))
   (declare (type (function ((integer 1 *) (vector number *)) *)
                  convergence-check))
   (let ((n (array-dimension A 0)))
     (declare (type (integer 0 *) n))
     (loop for iteration from 1 by 1 do
       (loop for i from 0 below n by 1 do
         (let ((rho 0))
           (declare (type number rho))
           (loop for j from 0 below n by 1 do
             (when (/= j i)
               (let ((a[ij]  (aref A i j))
                     (phi[j] (aref phi j)))
                 (incf rho (* a[ij] phi[j])))))
           (setf (aref phi i)
                 (+ (* (- 1 omega)
                       (aref phi i))
                    (* (/ omega (aref A i i))
                       (- (aref b i) rho))))))
       (format T "~&~d. solution = ~a" iteration phi)
       ;; Check if convergence is reached.
       (when (funcall convergence-check iteration phi)
         (return))))
   (the (vector number *) phi))
 
 ;; Summon the function with the exemplary parameters.
 (let ((A (make-array (list 4 4)
           :initial-contents
           '((  4  -1  -6   0 )
             ( -5  -4  10   8 )
             (  0   9   4  -2 )
             (  1   0  -7   5 ))))
       (b     (vector 2 21 -12 -6))
       (omega 0.5)
       (exact-solution (vector 3 -2 2 1)))
   (successive-over-relaxation A b omega
     :convergence-check
       #'(lambda (iteration phi)
           (declare (type (integer 0 *)     iteration))
           (declare (type (vector number *) phi))
           (let ((errors (get-errors phi exact-solution)))
             (declare (type (vector number *) errors))
             (format T "~&~d. errors = ~a" iteration errors)
             (or (is-convergent errors :error-tolerance 0.0)
                 (>= iteration +MAXIMUM-NUMBER-OF-ITERATIONS+))))))

A simple Python implementation of the pseudo-code provided above.

 import numpy as np
 
 def sor_solver(A, b, omega, initial_guess, convergence_criteria):
     """
     This is an implementation of the pseudo-code provided in the Wikipedia article.
     Arguments:
         A: nxn numpy matrix.
         b: n dimensional numpy vector.
         omega: relaxation factor.
         initial_guess: An initial solution guess for the solver to start with.
         convergence_criteria: The maximum discrepancy acceptable to regard
             the current solution as fitting.
     Returns:
         phi: solution vector of dimension n.
     """
     step = 0
     phi = initial_guess.copy()  # copy so the caller's guess is not mutated
     residual = np.linalg.norm(np.matmul(A, phi) - b)  # Initial residual
     while residual > convergence_criteria:
         for i in range(A.shape[0]):
             sigma = 0
             for j in range(A.shape[1]):
                 if j != i:
                     sigma += A[i, j] * phi[j]
             phi[i] = (1 - omega) * phi[i] + (omega / A[i, i]) * (b[i] - sigma)
         residual = np.linalg.norm(np.matmul(A, phi) - b)
         step += 1
         print("Step {} Residual: {:10.6g}".format(step, residual))
     return phi
 
 # An example case that mirrors the one in the Wikipedia article
 residual_convergence = 1e-8
 omega = 0.5  # Relaxation factor
 
 A = np.array([[4, -1, -6, 0],
               [-5, -4, 10, 8],
               [0, 9, 4, -2],
               [1, 0, -7, 5]])
 
 b = np.array([2, 21, -12, -6])
 
 initial_guess = np.zeros(4)
 
 phi = sor_solver(A, b, omega, initial_guess, residual_convergence)
 print(phi)


Symmetric successive over-relaxation

The version for symmetric matrices ''A'', in which

:U = L^T,

is referred to as symmetric successive over-relaxation, or (SSOR), in which

:P = \left(\frac{D}{\omega} + L\right)\frac{\omega}{2 - \omega}D^{-1}\left(\frac{D}{\omega} + U\right),

and the iterative method is

:\mathbf{x}^{k+1} = \mathbf{x}^k - \gamma^k P^{-1}(A\mathbf{x}^k - \mathbf{b}),\ k \ge 0.

The SOR and SSOR methods are credited to David M. Young Jr.
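
A minimal sketch of this scheme, assuming a small symmetric test system and a fixed step \gamma^k = 1: it assembles P as defined above and applies the iteration \mathbf{x}^{k+1} = \mathbf{x}^k - P^{-1}(A\mathbf{x}^k - \mathbf{b}).

 import numpy as np

 # Sketch of SSOR as an iterative method, with gamma^k fixed to 1.
 n = 4
 A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # symmetric, so U = L^T
 b = np.ones(n)
 omega = 1.2

 D = np.diag(np.diag(A))
 L = np.tril(A, -1)
 U = np.triu(A, 1)

 # P = (D/omega + L) * (omega / (2 - omega)) * D^{-1} * (D/omega + U)
 P = ((D / omega + L)
      @ ((omega / (2 - omega)) * np.linalg.inv(D))
      @ (D / omega + U))

 x = np.zeros(n)
 for _ in range(100):
     x = x - np.linalg.solve(P, A @ x - b)              # gamma^k = 1
 print(x, np.linalg.solve(A, b))                        # nearly identical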


Other applications of the method

A similar technique can be used for any iterative method. If the original iteration had the form

:x_{n+1} = f(x_n)

then the modified version would use

:x^\mathrm{new}_{n+1} = (1 - \omega)x^\mathrm{new}_n + \omega f(x^\mathrm{new}_n).

However, the formulation presented above, used for solving systems of linear equations, is not a special case of this formulation if \mathbf{x} is considered to be the complete vector. If this formulation is used instead, the equation for calculating the next vector will look like

:\mathbf{x}^{(k+1)} = (1 - \omega)\mathbf{x}^{(k)} + \omega L_*^{-1}(\mathbf{b} - U\mathbf{x}^{(k)}),

where L_* = L + D. Values of \omega > 1 are used to speed up convergence of a slow-converging process, while values of \omega < 1 are often used to help establish convergence of a diverging iterative process or speed up the convergence of an overshooting process.

There are various methods that adaptively set the relaxation parameter \omega based on the observed behavior of the converging process. Usually they help to reach a super-linear convergence for some problems but fail for others.
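
A toy illustration of this generic form (the function ''f'' and the value of ''ω'' are arbitrary assumptions, not from the article): the fixed-point iteration x_{n+1} = \cos(x_n) overshoots and oscillates, since ''f''′ is negative near the fixed point, so under-relaxation damps it and speeds up convergence.

 import math

 # Relaxation applied to a generic fixed-point iteration x_{n+1} = f(x_n).
 def f(x):
     return math.cos(x)     # fixed point near x = 0.739

 omega = 0.6                # assumed under-relaxation factor (omega < 1)
 x = 0.0
 for _ in range(20):
     x = (1 - omega) * x + omega * f(x)
 print(x)                   # approaches the fixed point of cos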


See also

* Jacobi method
* Gaussian Belief Propagation
* Matrix splitting


Notes


References

* Abraham Berman, Robert J. Plemmons, ''Nonnegative Matrices in the Mathematical Sciences'', 1994, SIAM.
* A. Hadjidimos, ''Successive overrelaxation (SOR) and related methods'', Journal of Computational and Applied Mathematics 123 (2000), 177–199.
* Yousef Saad, ''Iterative Methods for Sparse Linear Systems'', 1st edition, PWS, 1996.
* Netlib's copy of ''Templates for the Solution of Linear Systems'', by Barrett et al.
* Richard S. Varga, ''Matrix Iterative Analysis'', Second ed. (of 1962 Prentice Hall edition), Springer-Verlag, 2002.
* David M. Young Jr., ''Iterative Solution of Large Linear Systems'', Academic Press, 1971. (reprinted by Dover, 2003)


External links


Module for the SOR Method

Tridiagonal linear system solver
based on SOR, in C++