Alternating Direction Implicit Method
In numerical linear algebra, the alternating-direction implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems theory and control, and can be formulated to construct solutions in a memory-efficient, factored form. It is also used to numerically solve parabolic and elliptic partial differential equations, and is a classic method for modeling heat conduction and solving the diffusion equation in two or more dimensions. It is an example of an operator splitting method. The method was developed at Humble Oil in the mid-1950s by Jim Douglas Jr., Henry Rachford, and Don Peaceman. ADI for matrix equations: the ADI method is a two-step iteration process that alternately updates the column and row spaces of an approximate solution to AX - XB = C ...
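A minimal NumPy sketch of such a two-step sweep, assuming the common shifted-solve form of the iteration and user-supplied shift parameters (the names adi_sylvester, alphas and betas are illustrative, not from the article):

```python
import numpy as np

def adi_sylvester(A, B, C, alphas, betas, X0=None):
    """Two-step ADI iteration for the Sylvester equation A X - X B = C.

    alphas, betas: sequences of shift parameters, one pair per sweep.
    """
    n, m = C.shape
    I_n, I_m = np.eye(n), np.eye(m)
    X = np.zeros((n, m)) if X0 is None else np.array(X0, dtype=float)
    for alpha, beta in zip(alphas, betas):
        # Column-space update: solve (A - beta I) X_half = X (B - beta I) + C.
        X_half = np.linalg.solve(A - beta * I_n, X @ (B - beta * I_m) + C)
        # Row-space update: solve X_new (B - alpha I) = (A - alpha I) X_half - C,
        # transposed so the unknown sits on the left of the solver.
        rhs = (A - alpha * I_n) @ X_half - C
        X = np.linalg.solve((B - alpha * I_m).T, rhs.T).T
    return X
```

Convergence depends strongly on the choice of shifts; the classical optimal shifts trace back to a rational approximation problem studied by Zolotarev.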




Numerical Linear Algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social scienc ...
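As a minimal illustration of the representation error the paragraph describes, neither 0.1 nor 0.2 has an exact binary floating-point representation, so their computed sum differs slightly from the true value 0.3:

```python
print(0.1 + 0.2 == 0.3)          # False
print(abs((0.1 + 0.2) - 0.3))    # about 5.6e-17, the rounding error of finite precision
```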


Eigendecomposition Of A Matrix
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem. Fundamental theory of matrix eigenvectors and eigenvalues: a (nonzero) vector v of dimension N is an eigenvector of a square N×N matrix A if it satisfies a linear equation of the form A v = \lambda v for some scalar \lambda. Then \lambda is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate or shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem. This yields an equation for the eigenvalues, p(\lambda) = ...
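A small NumPy sketch (illustrative only): the matrix is rebuilt from its eigenvalues and eigenvectors as A = Q Λ Q^{-1}, and each column of Q satisfies the eigenvalue equation above.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # symmetric, hence diagonalizable
eigvals, Q = np.linalg.eig(A)           # eigenvalues and eigenvector columns
Lambda = np.diag(eigvals)
print(np.allclose(A, Q @ Lambda @ np.linalg.inv(Q)))      # True: A = Q Lambda Q^{-1}
print(np.allclose(A @ Q[:, 0], eigvals[0] * Q[:, 0]))     # True: A v = lambda v
```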



Symmetric Matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, A is symmetric if A = A^T. Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal: if a_{ij} denotes the entry in the i-th row and j-th column, then a_{ij} = a_{ji} for all indices i and j. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. In linear algebra, a real symmetric matrix represents a self-adjoint operator represented in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric ...
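A one-line check of the defining property (purely illustrative): a matrix is symmetric exactly when it equals its transpose, i.e. a_{ij} = a_{ji}.

```python
import numpy as np

A = np.array([[1, 7, 3],
              [7, 4, 5],
              [3, 5, 2]])
print(np.array_equal(A, A.T))   # True: entries mirror across the main diagonal
```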


Incomplete Cholesky Factorization
In numerical analysis, an incomplete Cholesky factorization of a symmetric positive definite matrix is a sparse approximation of the Cholesky factorization. An incomplete Cholesky factorization is often used as a preconditioner for algorithms like the conjugate gradient method. The Cholesky factorization of a positive definite matrix A is A = LL* where L is a lower triangular matrix. An incomplete Cholesky factorization is given by a sparse lower triangular matrix K that is in some sense close to L. The corresponding preconditioner is KK*. One popular way to find such a matrix K is to use the algorithm for finding the exact Cholesky decomposition in which K has the same sparsity pattern as A (any entry of K is set to zero if the corresponding entry in A is also zero). This gives an incomplete Cholesky factorization which is as sparse as the matrix A. Motivation: consider the following matrix as an example: A = ...
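Below is a minimal dense-matrix sketch (not the article's code, and the function name is illustrative) of the zero-fill variant described above, in which K keeps the sparsity pattern of the lower triangle of A:

```python
import numpy as np

def incomplete_cholesky(A):
    """Zero-fill incomplete Cholesky: lower-triangular K with the sparsity
    pattern of A such that K @ K.T approximates A."""
    K = np.tril(np.array(A, dtype=float))   # start from the lower triangle of A
    pattern = K != 0                         # positions allowed to stay nonzero
    n = K.shape[0]
    for k in range(n):
        K[k, k] = np.sqrt(K[k, k])
        for i in range(k + 1, n):
            if pattern[i, k]:
                K[i, k] /= K[k, k]
        # Rank-one update restricted to the retained sparsity pattern.
        for j in range(k + 1, n):
            for i in range(j, n):
                if pattern[i, j]:
                    K[i, j] -= K[i, k] * K[j, k]
    return K
```

The product K K^T (K K* in the complex case) then serves as the preconditioner; for some matrices the incomplete process can break down (a pivot becomes non-positive), and robust implementations shift the diagonal.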



Conjugate Gradient Method
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems. The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, who programmed it on the Z4, and extensively researched it. The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems. Description of the problem addres ...
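A compact, textbook-style sketch of the iteration for symmetric positive-definite A (names and tolerances are illustrative, not a library API):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b iteratively for symmetric positive-definite A."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                  # residual
    p = r.copy()                   # initial search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)          # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p      # next A-conjugate direction
        rs_old = rs_new
    return x
```

Each iteration needs only one matrix-vector product with A, which is why the method suits large sparse systems.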



System Of Linear Equations
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example,
:3x + 2y - z = 1
:2x - 2y + 4z = -2
:-x + \tfrac{1}{2}y - z = 0
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple (x,y,z) = (1,-2,-2), since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linea ...
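For instance, solving the example system above numerically recovers the stated solution (1, -2, -2):

```python
import numpy as np

# Coefficients of the example system:
#   3x + 2y -  z =  1
#   2x - 2y + 4z = -2
#  -x + y/2 -  z =  0
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])
print(np.linalg.solve(A, b))   # [ 1. -2. -2.]
```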


Band Matrix
In mathematics, particularly matrix theory, a band matrix or banded matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side. Bandwidth: formally, consider an n×n matrix A = (a_{i,j}). If all matrix elements are zero outside a diagonally bordered band whose range is determined by constants k_1 and k_2, that is,
:a_{i,j} = 0 \quad \text{if} \quad j < i - k_1 \text{ or } j > i + k_2; \quad k_1, k_2 \ge 0,
then the quantities k_1 and k_2 are called the lower bandwidth and upper bandwidth, respectively. The bandwidth of the matrix is the maximum of k_1 and k_2; in other words, it is the number k such that a_{i,j} = 0 if |i - j| > k. Examples: a band matrix with k_1 = k_2 = 0 is a diagonal matrix, with bandwidth 0; a band matrix with k_1 = k_2 = 1 is a tridiagonal matrix, with bandwidth 1; for k_1 = k_2 = 2 one has a pentadiagonal matrix, and so on. Triangular matrices: for k_1 = 0, k_2 = ...
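A small helper (illustrative) that measures the lower and upper bandwidths k_1, k_2 directly from the definition:

```python
import numpy as np

def bandwidths(A):
    """Return (k1, k2): the lower and upper bandwidths of a square matrix A."""
    rows, cols = np.nonzero(A)
    k1 = max(rows - cols, default=0)   # farthest nonzero below the main diagonal
    k2 = max(cols - rows, default=0)   # farthest nonzero above the main diagonal
    return int(k1), int(k2)

# A tridiagonal example: k1 = k2 = 1, so the bandwidth is 1.
T = np.array([[2, 1, 0, 0],
              [1, 2, 1, 0],
              [0, 1, 2, 1],
              [0, 0, 1, 2]])
print(bandwidths(T))   # (1, 1)
```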




Von Neumann Stability Analysis
In numerical analysis, von Neumann stability analysis (also known as Fourier stability analysis) is a procedure used to check the stability of finite difference schemes as applied to linear partial differential equations. The analysis is based on the Fourier decomposition of numerical error and was developed at Los Alamos National Laboratory after having been briefly described in a 1947 article by British researchers John Crank and Phyllis Nicolson. This method is an example of explicit time integration where the function that defines the governing equation is evaluated at the current time. Later, the method was given a more rigorous treatment in an article co-authored by John von Neumann. Numerical stability: the stability of numerical schemes is closely associated with numerical error. A finite difference scheme is stable if the errors made at one time step of the calculation do not cause the errors to be magnified as the computations are continued. A neutrally stable scheme ...
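As a worked illustration (not taken from the excerpt), applying the procedure to the forward-time centred-space scheme for the heat equation u_t = \alpha u_{xx} gives the amplification factor G(k) = 1 - 4r \sin^2(k\Delta x/2) with r = \alpha\Delta t/\Delta x^2; every Fourier error mode stays bounded, |G| \le 1, exactly when r \le 1/2:

```python
import numpy as np

def max_amplification(r, n_modes=1000):
    """Largest |G(k)| over all Fourier modes for the FTCS heat-equation scheme."""
    theta = np.linspace(0.0, np.pi, n_modes)       # k * dx over its full range
    G = 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2
    return np.abs(G).max()

print(max_amplification(0.4))   # 1.0  -> stable: no error mode grows
print(max_amplification(0.6))   # 1.4  -> unstable: errors are magnified each step
```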


Tridiagonal Matrix Algorithm
In numerical linear algebra, the tridiagonal matrix algorithm, also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system for n unknowns may be written as
:a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i,
where a_1 = 0 and c_n = 0. In matrix form,
:\begin{pmatrix} b_1 & c_1 & & & 0 \\ a_2 & b_2 & c_2 & & \\ & a_3 & b_3 & \ddots & \\ & & \ddots & \ddots & c_{n-1} \\ 0 & & & a_n & b_n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_n \end{pmatrix}.
For such systems, the solution can be obtained in O(n) operations instead of the O(n^3) required by Gaussian elimination. A first sweep eliminates the a_i's, and then an (abbreviated) backward substitution produces the solution. Examples of such matrice ...
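A sketch of that forward-sweep / back-substitution procedure (array layout and names are illustrative):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused), diagonal b,
    super-diagonal c (c[-1] unused) and right-hand side d, in O(n) operations."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal entries a_i.
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Abbreviated backward substitution.
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a 4x4 tridiagonal system with a_i = c_i = 1 and b_i = 2.
a = np.array([0.0, 1.0, 1.0, 1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([1.0, 1.0, 1.0, 0.0])
d = np.array([1.0, 2.0, 3.0, 4.0])
print(thomas(a, b, c, d))   # agrees with a dense np.linalg.solve on the same system
```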



Crank–Nicolson Method
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the 1940s. For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step \Delta t times the thermal diffusivity to the square of the space step, \Delta x^2, is large (typically, larger than 1/2 per von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution is necessary, the less accurate backward Euler method is o ...
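A minimal sketch of one Crank–Nicolson step for the one-dimensional heat equation u_t = \alpha u_{xx} with fixed boundary values (a dense solve is used for clarity; in practice the tridiagonal system would go to the Thomas algorithm above):

```python
import numpy as np

def crank_nicolson_step(u, alpha, dt, dx):
    """Advance u one time step; u holds grid values with fixed (Dirichlet)
    boundary values at u[0] and u[-1], interior nodes are updated implicitly."""
    u = np.asarray(u, dtype=float)
    n = len(u) - 2                        # number of interior nodes
    r = alpha * dt / dx**2
    # Second-difference operator on the interior nodes.
    D2 = (np.diag(-2.0 * np.ones(n)) +
          np.diag(np.ones(n - 1), 1) +
          np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    A = I - 0.5 * r * D2                  # implicit (new-time) operator
    B = I + 0.5 * r * D2                  # explicit (old-time) operator
    rhs = B @ u[1:-1]
    rhs[0]  += r * u[0]                   # boundary contributions, same value
    rhs[-1] += r * u[-1]                  # at both time levels
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(A, rhs)
    return u_new
```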


Yegor Ivanovich Zolotarev
Yegor (Egor) Ivanovich Zolotaryov (31 March 1847, Saint Petersburg – 19 July 1878, Saint Petersburg) was a Russian mathematician. Biography: Yegor was born the son of Agafya Izotovna Zolotaryova and the merchant Ivan Vasilevich Zolotaryov in Saint Petersburg, Imperial Russia. In 1857 he began to study at the fifth St Petersburg gymnasium, a school which centred on mathematics and natural science. He finished it with the silver medal in 1863. In the same year he was allowed to be an auditor at the physico-mathematical faculty of St Petersburg university. He had not been able to become a student before 1864 because he was too young. Among his academic teachers were Somov, Chebyshev and Aleksandr Korkin, with whom he would have a close scientific friendship. In November 1867 he defended his Kandidat thesis "About the Integration of Gyroscope Equations", and after 10 months there followed his thesis pro venia legendi "About one question on Minima". With this wor ...