In numerical linear algebra, the Chebyshev iteration is an iterative method for determining the solutions of a system of linear equations. The method is named after the Russian mathematician Pafnuty Chebyshev. Chebyshev iteration avoids the computation of inner products, which is necessary for the other nonstationary methods; on some distributed-memory architectures these inner products are a bottleneck with respect to efficiency. The price one pays for avoiding inner products is that the method requires enough knowledge about the spectrum of the coefficient matrix ''A'', that is, an upper estimate for the largest eigenvalue and a lower estimate for the smallest eigenvalue. There are modifications of the method for nonsymmetric matrices ''A''.
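The recurrence below, written to mirror the MATLAB example that follows (so it is an illustrative sketch rather than the only formulation in use), shows how the two spectral estimates enter the iteration. Here ''M'' denotes an optional preconditioner, with ''M'' = ''I'' when no preconditioning is applied:

\[
\begin{aligned}
& d = \tfrac{1}{2}(\lambda_{\max} + \lambda_{\min}), \qquad c = \tfrac{1}{2}(\lambda_{\max} - \lambda_{\min}), \qquad r_0 = b - Ax_0, \\
& \text{for } i = 1, 2, \ldots \\
& \qquad z_{i-1} = M^{-1} r_{i-1}, \\
& \qquad \alpha_1 = 1/d, \quad p_1 = z_0, \qquad \beta_1 = \tfrac{1}{2}(c\,\alpha_1)^2, \qquad \beta_{i-1} = \left(\tfrac{1}{2}\,c\,\alpha_{i-1}\right)^2 \;\; (i > 2), \\
& \qquad \alpha_i = \frac{1}{d - \beta_{i-1}/\alpha_{i-1}}, \qquad p_i = z_{i-1} + \beta_{i-1}\,p_{i-1} \quad (i \ge 2), \\
& \qquad x_i = x_{i-1} + \alpha_i\,p_i, \qquad r_i = r_{i-1} - \alpha_i\,A\,p_i.
\end{aligned}
\]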


Example code in MATLAB

function [x] = SolChebyshev002(A, b, x0, iterNum, lMax, lMin)

    d = (lMax + lMin) / 2;
    c = (lMax - lMin) / 2;
    preCond = eye(size(A)); % preconditioner (identity, i.e. no preconditioning)
    x = x0;
    r = b - A * x;

    for i = 1:iterNum % size(A, 1)
        z = linsolve(preCond, r);
        if (i == 1)
            p = z;
            alpha = 1/d;
        elseif (i == 2)
            beta = (1/2) * (c * alpha)^2;
            alpha = 1/(d - beta / alpha);
            p = z + beta * p;
        else
            beta = (c * alpha / 2)^2;
            alpha = 1/(d - beta / alpha);
            p = z + beta * p;
        end

        x = x + alpha * p;
        r = b - A * x; % (= r - alpha * A * p)
        if (norm(r) < 1e-15), break; end % stop if necessary
    end
end
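A minimal usage sketch follows, assuming the function above is saved as SolChebyshev002.m. The test matrix, right-hand side, and the use of eig to obtain exact spectral bounds are illustrative assumptions; in practice one would supply cheap estimates of the extreme eigenvalues instead:

% small symmetric positive definite test system (illustrative only)
A  = [4 1 0; 1 3 1; 0 1 2];
b  = [1; 2; 3];
x0 = zeros(3, 1);

% exact eigenvalue bounds, used here only for illustration
lambda = eig(A);
lMin = min(lambda);
lMax = max(lambda);

x = SolChebyshev002(A, b, x0, 50, lMax, lMin);
disp(norm(b - A * x)) % residual norm; should be near machine precision

Looser bounds (a smaller lMin and a larger lMax) that still bracket the spectrum typically only slow convergence rather than prevent it, which is why rough but safe estimates are acceptable.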


See also

* Iterative method. Linear systems
* List of numerical analysis topics. Solving systems of linear equations
* Jacobi iteration
* Gauss–Seidel method
* Modified Richardson iteration
* Successive over-relaxation
* Conjugate gradient method
* Generalized minimal residual method
* Biconjugate gradient method
* Iterative Template Library
* IML++


References

* On the Convergence of Chebyshev’s Method for Multiple Polynomial Zeros


External links






* Chebyshev Iteration. Implementation in the Go language