Chebyshev Iteration
In numerical linear algebra, the Chebyshev iteration is an iterative method for determining the solutions of a system of linear equations. The method is named after the Russian mathematician Pafnuty Chebyshev. Chebyshev iteration avoids the computation of inner products, which the other nonstationary methods require. On some distributed-memory architectures these inner products are a bottleneck for efficiency. The price one pays for avoiding inner products is that the method requires enough knowledge about the spectrum of the coefficient matrix A, that is, an upper estimate for the largest eigenvalue and a lower estimate for the smallest eigenvalue. There are modifications of the method for nonsymmetric matrices A.

Example code in MATLAB (the excerpt breaks off inside the loop; the remainder of the loop body below follows the standard Chebyshev recurrences):

  function x = SolChebyshev002(A, b, x0, iterNum, lMax, lMin)
    d = (lMax + lMin) / 2;
    c = (lMax - lMin) / 2;
    preCond = eye(size(A));  % preconditioner (identity here)
    x = x0;
    r = b - A * x;
    for i = 1:iterNum % size(A, 1)
        z = linsolve(preCond, r);       % apply the preconditioner
        if i == 1
            p = z;
            alpha = 1 / d;
        else
            if i == 2, beta = (1/2) * (c * alpha)^2;
            else       beta = (c * alpha / 2)^2; end
            alpha = 1 / (d - beta / alpha);
            p = z + beta * p;
        end
        x = x + alpha * p;
        r = b - A * x;                  % (= r - alpha * A * p)
        if norm(r) < 1e-15, break; end  % stop if converged
    end
  end
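A small usage sketch (the test problem is illustrative; the eigenvalue bounds only need to bracket the spectrum, and for this tridiagonal test matrix they happen to be known in closed form):

  % Usage sketch (illustrative SPD test problem)
  n = 100;
  e = ones(n, 1);
  A = full(spdiags([-e 2*e -e], -1:1, n, n));  % SPD tridiagonal
  b = A * ones(n, 1);                          % exact solution: all ones
  lMin = 2 - 2 * cos(pi / (n + 1));            % smallest eigenvalue
  lMax = 2 + 2 * cos(pi / (n + 1));            % largest eigenvalue
  x = SolChebyshev002(A, b, zeros(n, 1), 200, lMax, lMin);
  disp(norm(x - ones(n, 1)))                   % small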


Numerical Linear Algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms that efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number of which it is an approximation. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and it is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics.
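As a small illustration of the floating-point behavior just described (an illustrative sketch, not part of the article): in double precision, decimal fractions such as 0.1 are not exactly representable, so stored values drift slightly from the true numbers they approximate, and the drift accumulates across operations.

  % Round-off in double precision (illustrative)
  a = 0.1 + 0.2;
  disp(a == 0.3)        % 0 (false): the stored sum differs from 0.3
  disp(a - 0.3)         % approx 5.6e-17, the representation error
  s = 0;
  for k = 1:10, s = s + 0.1; end
  disp(s == 1)          % also 0: the small errors accumulate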


Gauss–Seidel Method
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. The method was only mentioned in a private letter from Gauss to his student Gerling in 1823; a publication was not delivered before 1874, by Seidel.

Description
The Gauss–Seidel method is an iterative technique for solving a square system of linear equations with unknown \mathbf{x}: A\mathbf{x} = \mathbf{b}. It is defined by the iteration L_* \mathbf{x}^{(k+1)} = \mathbf{b} - U \mathbf{x}^{(k)}, where \mathbf{x}^{(k)} is the k-th approximation or iteration of \mathbf{x}, \mathbf{x}^{(k+1)} is the next or (k+1)-th iteration of \mathbf{x}, and the matrix A is decomposed into a lower triangular component L_* and a strictly upper triangular component U, so that A = L_* + U.
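A minimal MATLAB sketch of one way to implement the sweep described above; the function name, argument list, and residual-based stopping test are illustrative choices, not part of the article.

  function x = gauss_seidel(A, b, x, iterNum, tol)
    % Gauss-Seidel: sweep through the unknowns, using each
    % updated component immediately within the same sweep.
    n = length(b);
    for k = 1:iterNum
        for i = 1:n
            sigma = A(i, :) * x - A(i, i) * x(i);  % sum over j ~= i
            x(i) = (b(i) - sigma) / A(i, i);
        end
        if norm(b - A * x) < tol, break; end  % stop once residual is small
    end
  end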


IML++
IML++, or the Iterative Methods Library, is a discontinued C++ library for solving linear systems of equations. It is said to be "templated" in the sense that the same source code works for dense, sparse, and distributed matrices. Some of the supported solution methods are:
* Richardson Iteration
* Chebyshev Iteration
* Conjugate Gradient (CG)
* Conjugate Gradient Squared (CGS)
* BiConjugate Gradient (BiCG)
* BiConjugate Gradient Stabilized (BiCGSTAB)
* Generalized Minimum Residual (GMRES)
* Quasi-Minimal Residual Without Lookahead (QMR)

Status
IML++ was developed by the National Institute of Standards and Technology, and is in the public domain. However, it is no longer being actively developed.




Iterative Template Library
Iteration is the repetition of a process in order to generate a (possibly unbounded) sequence of outcomes. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration. In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms.

Mathematics
In mathematics, iteration may refer to the process of iterating a function, i.e. applying a function repeatedly, using the output from one iteration as the input to the next. Iteration of apparently simple functions can produce complex behaviors and difficult problems – for examples, see the Collatz conjecture and juggler sequences. Another use of iteration in mathematics is in iterative methods, which are used to produce approximate numerical solutions to certain mathematical problems. Newton's method is an example of an iterative method; a sketch is given below. Manual calculation of a number's square root is a common use and a well-known example.
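For instance, Newton's method applied to f(x) = x^2 - a yields the classical square-root iteration; a minimal illustrative sketch:

  % Newton's method for sqrt(a): x <- (x + a/x) / 2 (illustrative)
  a = 2;
  x = 1;                     % initial guess
  for k = 1:10
      x = (x + a / x) / 2;   % output of one iteration feeds the next
  end
  disp(x - sqrt(2))          % error is at round-off level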


Biconjugate Gradient Method
In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations A x = b. Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A^*.

The algorithm
# Choose an initial guess x_0, two other vectors x_0^* and b^*, and a preconditioner M
# r_0 \leftarrow b - A x_0
# r_0^* \leftarrow b^* - x_0^* A
# p_0 \leftarrow M^{-1} r_0
# p_0^* \leftarrow r_0^* M^{-1}
# for k = 0, 1, \ldots do
## \alpha_k \leftarrow \dfrac{r_k^* M^{-1} r_k}{p_k^* A p_k}
## x_{k+1} \leftarrow x_k + \alpha_k \cdot p_k
## x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k} \cdot p_k^*
## r_{k+1} \leftarrow r_k - \alpha_k \cdot A p_k
## r_{k+1}^* \leftarrow r_k^* - \overline{\alpha_k} \cdot p_k^* A
## \beta_k \leftarrow \dfrac{r_{k+1}^* M^{-1} r_{k+1}}{r_k^* M^{-1} r_k}
## p_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k \cdot p_k
## p_{k+1}^* \leftarrow r_{k+1}^* M^{-1} + \overline{\beta_k} \cdot p_k^*
In the above formulation, the computed r_k and r_k^* satisfy r_k = b - A x_k and r_k^* = b^* - x_k^* A ...
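A minimal MATLAB sketch of the recurrences above for the real, unpreconditioned case (M = I), with the shadow residual initialized to r_0, a common simplifying choice; the function name and stopping test are illustrative. MATLAB also ships a built-in bicg solver.

  function x = bicg_sketch(A, b, x, iterNum, tol)
    % Unpreconditioned biconjugate gradient (real case, M = I)
    r  = b - A * x;
    rs = r;                       % shadow residual, tracks A'
    p  = r;  ps = rs;
    rho = rs' * r;
    for k = 1:iterNum
        Ap = A * p;
        alpha = rho / (ps' * Ap);
        x  = x + alpha * p;
        r  = r - alpha * Ap;
        rs = rs - alpha * (A' * ps);
        rho_new = rs' * r;
        beta = rho_new / rho;  rho = rho_new;
        p  = r + beta * p;
        ps = rs + beta * ps;
        if norm(r) < tol, break; end
    end
  end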


Generalized Minimal Residual Method
In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector. The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986. It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975. The MINRES method requires that the matrix be symmetric, but has the advantage that it only requires handling of three vectors. GMRES is a special case of the DIIS method developed by Peter Pulay in 1980. DIIS is applicable to non-linear systems.

The method
Denote the Euclidean norm of any vector v by \|v\|. Denote the (square) system of linear equations to be solved by Ax = b. The matrix A is assumed to be invertible of size m-by-m. Furthermore, it is assumed that b is normalized, i.e., that \|b\| = 1 ...
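MATLAB provides a built-in gmres implementation; a small usage sketch (the test matrix and tolerances are illustrative):

  % Restarted GMRES on a nonsymmetric system (illustrative parameters)
  n = 100;
  e = ones(n, 1);
  A = spdiags([-e 2*e -0.5*e], -1:1, n, n);  % nonsymmetric tridiagonal
  b = A * ones(n, 1);                        % exact solution: all ones
  x = gmres(A, b, 20, 1e-10, 50);            % restart 20, tol 1e-10, 50 outer its
  disp(norm(x - ones(n, 1)))                 % small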



Conjugate Gradient Method
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems. The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, who programmed it on the Z4 and extensively researched it. The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems.

Description of the problem addressed by conjugate gradients ...
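A minimal MATLAB sketch of the unpreconditioned conjugate gradient iteration for a symmetric positive-definite A; the function name and stopping test are illustrative, and MATLAB's built-in pcg offers a preconditioned version.

  function x = cg_sketch(A, b, x, iterNum, tol)
    % Conjugate gradient for symmetric positive-definite A
    r = b - A * x;
    p = r;
    rho = r' * r;
    for k = 1:iterNum
        Ap = A * p;
        alpha = rho / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap;
        rho_new = r' * r;
        if sqrt(rho_new) < tol, break; end
        p = r + (rho_new / rho) * p;   % new search direction, A-conjugate to p
        rho = rho_new;
    end
  end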



Successive Over-relaxation
In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process. It was devised simultaneously by David M. Young Jr. and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel. An example is the method of Lewis Fry Richardson, and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, requiring some expertise to ensure convergence to the solution, which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young Jr.

Formulation
Given a square system of n linear equations with unknown \mathbf{x}: A\mathbf{x} = \mathbf{b}, where A = (a_{ij}) is the n \times n matrix of coefficients ...
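A minimal MATLAB sketch of an SOR sweep with relaxation factor omega; with omega = 1 it reduces to Gauss–Seidel. The function name and stopping test are illustrative choices.

  function x = sor_sketch(A, b, x, omega, iterNum, tol)
    % SOR: a Gauss-Seidel sweep blended with the previous value
    n = length(b);
    for k = 1:iterNum
        for i = 1:n
            sigma = A(i, :) * x - A(i, i) * x(i);  % sum over j ~= i
            x(i) = (1 - omega) * x(i) + omega * (b(i) - sigma) / A(i, i);
        end
        if norm(b - A * x) < tol, break; end
    end
  end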


Modified Richardson Iteration
Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910. It is similar to the Jacobi and Gauss–Seidel methods. We seek the solution to a set of linear equations, expressed in matrix terms as A x = b. The Richardson iteration is x^{(k+1)} = x^{(k)} + \omega \left( b - A x^{(k)} \right), where \omega is a scalar parameter that has to be chosen such that the sequence x^{(k)} converges. It is easy to see that the method has the correct fixed points, because if it converges, then x^{(k+1)} \approx x^{(k)} and x^{(k)} has to approximate a solution of A x = b.

Convergence
Subtracting the exact solution x, and introducing the notation e^{(k)} = x^{(k)} - x for the error, we get the equality for the errors e^{(k+1)} = e^{(k)} - \omega A e^{(k)} = (I - \omega A) e^{(k)}. Thus, \|e^{(k+1)}\| = \|(I - \omega A) e^{(k)}\| \leq \|I - \omega A\| \, \|e^{(k)}\|, for any vector norm and the corresponding induced matrix norm. Thus, if \|I - \omega A\| < 1, the method converges ...
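A minimal MATLAB sketch of the update above (names and stopping test illustrative); for symmetric positive-definite A, the classical optimal choice is omega = 2 / (lambda_min + lambda_max).

  function x = richardson_sketch(A, b, x, omega, iterNum, tol)
    % Modified Richardson iteration: x <- x + omega * (b - A*x)
    for k = 1:iterNum
        r = b - A * x;
        if norm(r) < tol, break; end
        x = x + omega * r;
    end
  end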




Jacobi Iteration
In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.

Description
Let A\mathbf{x} = \mathbf{b} be a square system of n linear equations, where:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.

Then A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U:

A = D + L + U \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} ...
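A minimal MATLAB sketch of the resulting update x^{(k+1)} = D^{-1}(b - (L + U) x^{(k)}); the function name and stopping test are illustrative.

  function x = jacobi_sketch(A, b, x, iterNum, tol)
    % Jacobi iteration via the splitting A = D + (L + U)
    D = diag(diag(A));       % diagonal part
    R = A - D;               % off-diagonal part, L + U
    for k = 1:iterNum
        x = D \ (b - R * x); % all components updated from the old iterate
        if norm(b - A * x) < tol, break; end
    end
  end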


Iterative Method
In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. A specific implementation of an iterative method, including the termination criteria, is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common. In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations A\mathbf{x} = \mathbf{b} by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations ...
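To make the contrast concrete, an illustrative sketch (not from the article) solving the same symmetric positive-definite system both ways:

  % Direct versus iterative solution of one system (illustrative)
  n = 200;
  e = ones(n, 1);
  A = spdiags([-e 2*e -e], -1:1, n, n);  % symmetric positive definite
  b = A * ones(n, 1);
  x_direct = A \ b;                      % direct: finite elimination procedure
  x_iter = pcg(A, b, 1e-10, 500);        % iterative: refine until tolerance
  disp(norm(x_direct - x_iter))          % the two answers agree to the tolerance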


List Of Numerical Analysis Topics
This is a list of numerical analysis topics.

General
* Validated numerics
* Iterative method
* Rate of convergence — the speed at which a convergent sequence approaches its limit
** Order of accuracy — rate at which numerical solution of differential equation converges to exact solution
* Series acceleration — methods to accelerate the speed of convergence of a series
** Aitken's delta-squared process — most useful for linearly converging sequences
** Minimum polynomial extrapolation — for vector sequences
** Richardson extrapolation
** Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
** Van Wijngaarden transformation — for accelerating the convergence of an alternating series
* Abramowitz and Stegun — book containing formulas and tables of many special functions
** Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
* Curse of dimensionality
* Local convergence and global convergence — whether you need a good initial guess to get convergence ...